Prosecution Insights
Last updated: April 19, 2026
Application No. 18/355,064

Multiple Neural Network Training Nodes in a Read Channel

Status: Non-Final Office Action (§103), OA Round 1

Filed: Jul 19, 2023
Examiner: ROSTAMI, MOHAMMAD S
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Western Digital Technologies Inc.

Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Projected Time to Grant: 3y 10m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 67%, above average (425 granted / 635 resolved; +11.9% vs TC avg)
Interview Lift: strong, +26.3% for resolved cases with an interview vs. without
Typical Timeline: 3y 10m average prosecution; 37 applications currently pending
Career History: 672 total applications across all art units

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 54.9% (+14.9% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Comparisons are to the Tech Center average estimate • Based on career data from 635 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims
Claims 1-20 are pending, of which claims 1, 11, 12, and 20 are in independent form. Claim 20 is subject to claim interpretation. Claims 1-20 are rejected under 35 U.S.C. 103.

Examiner's Note (35 USC 101 Abstract Idea)
Regarding claims 1, 11, 12, and 20: With respect to Step 2A, Prong One (Judicial Exception), the claims recite an abstract idea, law of nature, or natural phenomenon. Specifically, the following limitations recite mathematical concepts and/or mental processes and/or certain methods of organizing human activity. The claims recite a read channel circuit including: a first NN circuit that receives a read data signal and modifies the signal using trained node coefficients; a second NN circuit that processes the modified signal and determines an output read data signal; and output to a soft output detector. The claims are directed to signal processing in a read channel circuit. The NNs are implemented as circuits, not merely mathematical models. The operations are applied to physical read data signals in a storage/read channel context. While the NNs involve mathematics, it is reasonable to recognize that mathematical operations integrated into a specific technological process (e.g., signal processing in hardware) are typically not abstract. These claims focus on improving read channel signal detection, which is a technological domain. The claims are not directed to a judicial exception.
With respect to Step 2A, Prong Two (Practical Application), even assuming arguendo that NN processing implicates mathematics, the claims clearly integrate the processing into a practical technological application: they operate on read data signals, use NN circuits, produce signals for a soft output detector, and are situated in a read channel circuit architecture. This reflects a specific signal processing pipeline, a hardware-oriented implementation, and improvements to data detection in storage systems. The claims are integrated into a practical application.

Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use(s) a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "means for generating…"; "a first means for: receiving a first input …; and modifying…"; and "a second means for: receiving a second input …; determining, … and outputting" in claim 20. The corresponding structure is described in ¶ [0006]-[0010] of the specification, reciting: "a read channel circuit that includes a first neural network circuit configured to: receive a first input read data signal corresponding to at least one data symbol; and modify, based on a first neural network configuration and a first set of trained node coefficients, the input read data signal to a first modified read data signal. The read channel circuit also includes a second neural network circuit configured to: receive a second input read data signal...
A data storage device may include the read channel circuit, a non-volatile storage medium, and an analog-to-digital converter configured to generate the first input read data signal based on data read from the non-volatile storage medium"; the means are tied to hardware/tangible devices. Therefore, claim 20 is not subject to 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the "means…" limitations (placeholders) recite hardware components performing purely functional operations, with corresponding structure recited in the claims/specification.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over BELZER, Benjamin Joseph et al. (US 20200389188 A1) [Belzer] in view of BERMAN, Amit et al. (US 20220293192 A1) [Berman].
Regarding claims 1, 11, 12, and 20, Belzer discloses a read channel circuit (NN signal processor for magnetic storage channels; NN equalizer in a read/write channel; turbo-equalization detection architecture, ¶ [0010]-[0017], [0043], and [0112]), comprising: a first neural network circuit configured to: receive a first input read data signal corresponding to at least one data symbol (DNN/CNN/LSTM based detector receiving filtered waveform inputs; vector r including a plurality of readback samples; NN equalizer processing read channel samples; symbol detection from storage channel signals; NN media noise predictor receiving filtered waveform inputs, ¶ [0016], [0017], [0043], [0063], [0071], [0076]; these waveform/readback inputs correspond to the claimed input read data signal associated with data symbols); and modify, based on a first neural network configuration and a first set of trained node coefficients, the input read data signal to a first modified read data signal (equalizer filtered output provided to detector; NN layers performing feature extraction and transformation; CNN equalizer performing cross-track ISI equalization, ¶ [0057], [0113], [0131]; these sections explain that the NN is trained (node weights = trained coefficients), performs equalization/filtering, and outputs processed equalized signals); [a second neural network circuit] configured to: receive a second input read data signal based on the first modified read data signal (equalizer output fed to trellis-based detector; DNN APP detector outputs passed to channel decoder and iterated; multi-stage CNN/DNN architecture, ¶ [0057], [0112], [0131]; these show multi-stage processing where downstream detection operates on equalized signals); determine, based on a second neural network configuration and a second set of node coefficients, an output read data signal corresponding to the at least one data symbol (CNN-based BCJR-DNN detector producing symbol estimates; DNN APP detector outputs LLR values or coded bits; DNN predictor working with trellis detector, ¶ [0063], [0112], [0043]; these determinations use trained network weights (node coefficients)); and output the output read data signal to a soft output detector for determining the at least one data symbol (soft-output Viterbi/BCJR detector; soft-input soft-output channel decoder exchanging LLRs; turbo detector exchanging likelihood information, ¶ [0057], [0112], [0063]).
However, Belzer does not explicitly disclose a second neural network circuit. Berman discloses a second neural network circuit (plurality of neural networks; multiple threshold networks; multiple shallow ML models; selection among networks, ¶ [0103], [0105], [0118], [0124]; these sections establish multiple distinct NN circuits). It would have been obvious to one of ordinary skill in the art at the time of the present invention to combine the teachings of the cited references because Berman's system would have allowed Belzer to provide a second neural network circuit. The motivation to combine is apparent in Belzer's reference, because there is a need for reliable, low-power multi-cell memory systems for use in mobile electronic devices. Combining known ML equalization and SISO decoding techniques would improve error-rate performance.
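For readers less familiar with read-channel detection, the toy sketch below (an editor's illustration, not part of the Office Action or the cited references) shows the shape of the claimed pipeline the examiner is mapping: a first neural network with trained coefficients modifies the read samples, a second neural network turns the modified signal into soft per-symbol scores, and a soft-output stage makes the symbol decision. All function names, layer sizes, and activation choices are assumptions for illustration.

```python
import numpy as np

# Illustrative only: toy two-stage neural-network read channel.
# Shapes, activations, and coefficient values are assumptions.

rng = np.random.default_rng(0)

# "Trained node coefficients" for the first NN circuit (equalizer-like).
W1, b1 = rng.normal(size=(8, 8)), np.zeros(8)

# "Trained node coefficients" for the second NN circuit (detector-like).
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def first_nn_circuit(read_samples: np.ndarray) -> np.ndarray:
    """Modify the input read data signal into a 'modified read data signal'."""
    return np.tanh(read_samples @ W1 + b1)

def second_nn_circuit(modified_signal: np.ndarray) -> np.ndarray:
    """Determine an output read data signal (per-symbol soft scores)."""
    logits = modified_signal @ W2 + b2
    return logits - np.log(np.exp(logits).sum(keepdims=True))  # log-probabilities

def soft_output_detector(soft_scores: np.ndarray) -> int:
    """Pick the most likely data symbol from the soft scores."""
    return int(np.argmax(soft_scores))

read_data_signal = rng.normal(size=8)            # samples for one data symbol
modified = first_nn_circuit(read_data_signal)    # stage 1
soft_scores = second_nn_circuit(modified)        # stage 2
symbol = soft_output_detector(soft_scores)       # downstream decision
print(symbol)
```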
Regarding claims 2 and 13, the combination of Belzer and Berman discloses that the first neural network circuit comprises a first training node configured to train the first set of trained node coefficients based on first node training logic (Belzer: DNN/CNN/LSTM used in the read channel; networks having learned weights/parameters; training of the neural network detectors, ¶ [0063], [0071], [0075]; these sections teach that neural network weights are learned, coefficients are trained, and training logic exists); the second neural network circuit comprises a second training node (Berman: plurality of neural networks; multiple threshold networks; multiple shallow ML models; selection among networks, ¶ [0103], [0105], [0118], [0124]; these sections establish multiple distinct NN circuits) configured to train the second set of trained node coefficients based on second node training logic (Belzer: DNN/CNN/LSTM used in the read channel; networks having learned weights/parameters; training of the neural network detectors, ¶ [0063], [0071], [0075]); and the first node training logic and the second node training logic are different (Berman: distinct NN nodes/circuits, ¶ [0103], [0105], [0118], [0124]).
Regarding claim 3, the combination of Belzer and Berman discloses that the first node training logic comprises: a first input read signal type (Belzer: DNN receives inputs; APP detector outputs LLRs used by DNN; CNN operates on stacked LLR input variables, ¶ [0043], [0112], [0084]); a first target output value type (regression layer predicts media noise response; detector outputs LLR estimates; detector performs evaluation, ¶ [0082], [0112], [0063]); a first loss function (regression loss function tied to training label and prediction, ¶ [0082]); a first training data source (simulated datasets; waveform simulations; HDD waveform data, ¶ [0063], [0112]-[0113]); and at least one first training condition (stochastic gradient descent training; iterative decoding environment; turbo iteration, ¶ [0074], [0112]); the second node training logic (plurality of threshold networks trained for different weak decision voltage ranges; each network corresponding to a different voltage range; weights adjusted during training, ¶ [0096], [0097], [0077]) comprises: a second input read signal type (Belzer: DNN receives inputs; APP detector outputs LLRs used by DNN; CNN operates on stacked LLR input variables, ¶ [0043], [0112], [0084]); a second target output value type (regression layer predicts media noise response; detector outputs LLR estimates; detector performs evaluation, ¶ [0082], [0112], [0063]); a second loss function (regression loss function tied to training label and prediction, ¶ [0082]); a second training data source (simulated datasets; waveform simulations; HDD waveform data, ¶ [0063], [0112]-[0113]); and at least one second training condition (stochastic gradient descent training; iterative decoding environment; turbo iteration, ¶ [0074], [0112]); and the first node training logic is different from the second node training logic based on at least one difference between at least one of: an input read signal type; a target output value type; a loss function; a training data source; and at least one training condition (the FC-DNN, CNN, and LSTM networks, ¶ [0063], [0071], [0075], are trained differently).
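The claim 3 limitations enumerate the ways two "node training logics" may differ. As a purely illustrative aid, the hypothetical configuration objects below show one way such differences (input signal type, target output type, loss function, data source, training condition) could be expressed; every field value here is invented for illustration and is not drawn from Belzer or Berman.

```python
from dataclasses import dataclass

# Hypothetical configuration objects illustrating claim 3's notion of
# "node training logic" differing between the two neural network circuits.

@dataclass
class NodeTrainingLogic:
    input_read_signal_type: str    # what the node consumes during training
    target_output_value_type: str  # what the node is trained to predict
    loss_function: str             # objective minimized during training
    training_data_source: str      # where the labeled examples come from
    training_condition: str        # e.g., optimizer / schedule / trigger

first_node_training_logic = NodeTrainingLogic(
    input_read_signal_type="equalized waveform samples",
    target_output_value_type="media-noise estimate (regression)",
    loss_function="mean squared error",
    training_data_source="stored known symbol sequences",
    training_condition="stochastic gradient descent, offline",
)

second_node_training_logic = NodeTrainingLogic(
    input_read_signal_type="LLR vectors from the upstream stage",
    target_output_value_type="per-symbol log-likelihoods (classification)",
    loss_function="cross-entropy",
    training_data_source="runtime hard decisions from the soft output detector",
    training_condition="periodic retraining during turbo iterations",
)

# The two logics differ in every enumerated aspect, more than satisfying the
# claim's "at least one difference" language.
assert first_node_training_logic != second_node_training_logic
```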
Regarding claims 5 and 15, the combination of Belzer and Berman discloses that the first node training logic is configured to train the first set of node coefficients (Belzer: network weights are trainable parameters; trained via supervised learning; optimized using loss minimization, ¶ [0073]-[0075], [0082]) and the second node training logic is configured to train the second set of node coefficients (Berman: plurality of threshold networks; each network has its own trained weights; each network trained for its voltage range, ¶ [0077], [0096]-[0097]) using at least one of: stored training data comprising a known sequence of data symbols; runtime training data based on a sequence of data symbols determined by the read channel circuit and a corresponding read data signal; and runtime training data based on at least one data symbol determined by hard decisions from the soft output detector and a corresponding read data signal (Belzer: GFP dataset used for experiments; supervised training with labeled data; known ground-truth symbol sequences; iterative detection loop; detector outputs fed back; adaptive detection process; LLR outputs from neural detectors; soft outputs used by downstream decoding; detector produces symbol estimates, ¶ [0063], [0082], [0084], [0112]).
Regarding claims 6 and 16, the combination of Belzer and Berman discloses that the first neural network circuit is configured as a waveform combiner (Belzer: CNN applies FIR filters to input data; convolution layers process waveform inputs; detector processes channel samples, ¶ [0084], [0073]-[0075], [0063]; these sections specify receiving multiple waveform samples and then filtering and aggregating them) and further configured to: receive a third input read data signal (Belzer: three input variables (LLR, etc.); fully connected layer receives inputs; multi-dimensional input tensors, ¶ [0084], [0073], [0075]; these features map to the first input, the second input, and additional channel filters); and combine the first input read data signal and the second input read data signal (Belzer: stacking input variables into an NxN array; convolution layers aggregate inputs; affine combination of inputs, ¶ [0084], [0075], [0073]) to modify the first input read data signal to the first modified read data signal (Belzer: detector outputs symbol estimates; network predicts response/media noise; iterative improvement of detection, ¶ [0063], [0082], [0112]).
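To picture the "waveform combiner" reading of claims 6 and 16, the short sketch below (an editor's illustration only) simply forms a trained weighted combination of several input read data signals into one modified signal; the weight values and signal shapes are assumptions, not details from Belzer.

```python
import numpy as np

# Illustrative waveform combiner: aggregates multiple input read data
# signals (e.g., from adjacent reads or readers) into one modified signal.

rng = np.random.default_rng(1)
combine_weights = rng.normal(size=3)   # hypothetical trained coefficients

def waveform_combiner(first_signal, second_signal, third_signal):
    """Combine input read data signals into the first modified read data signal."""
    stacked = np.stack([first_signal, second_signal, third_signal])  # shape (3, N)
    return combine_weights @ stacked                                 # weighted sum, shape (N,)

n = 8
first = rng.normal(size=n)
second = rng.normal(size=n)
third = rng.normal(size=n)
modified_read_signal = waveform_combiner(first, second, third)
print(modified_read_signal.shape)  # (8,)
```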
Regarding claims 7 and 17, the combination of Belzer and Berman discloses that the second neural network circuit (Belzer: each trellis stage is provided by a DNN; DNN media noise predictor used in detection; LLR inputs provided to the BCJR-DNN detector, ¶ [0066], [0007], [0092]) is configured as a state detector (number of super-trellis states; trellis detector/BCJR/SOVA detectors; LLR/state block associated with trellis detector, ¶ [0007], [0058], [0027]-[0028]; each trellis stage is provided by a DNN … for each trellis branch, ¶ [0066]); the output read data signal comprises a vector of possible states for the at least one data symbol (Belzer: trellis-based detector (BCJR/SOVA) produces soft information (LLR-type outputs) used for decoding, ¶ [0058]; decoder outputs LLRs that are passed onward (soft information representing multiple hypotheses), ¶ [0112]); and the soft output detector is configured to populate a decision matrix based on the vector of possible states for determining the at least one data symbol (Belzer: explicitly calls out BCJR/SOVA trellis detector (canonical soft-output detectors), ¶ [0058]; discusses super-trellis states (the state space over which detection is performed), ¶ [0007]; DNN used at trellis stages/branches, consistent with computing state/branch metrics used for decisions, ¶ [0066]).
Regarding claims 8 and 18, the combination of Belzer and Berman discloses that the first neural network circuit is configured as an equalizer (Belzer: DNN turbo-equalization architecture; NN performs detection and equalization; two-dimensional NN equalizer, ¶ [0003], [0012], [0015]); and modifying the input read data signal to the first modified read data signal comprises equalizing the input read data signal (Belzer: waveform equalizer formed of NN; NN equalizer mitigates interference; turbo-equalization loop using DNN, ¶ [0013], [0014], [0021]).
Regarding claims 9 and 19, the combination of Belzer and Berman discloses a third neural network circuit configured as a parameter estimator (Belzer: DNN media noise predictor; predictor models media noise, ¶ [0001], [0020], [0021]) and configured to: receive a third input read data signal corresponding to the at least one data symbol (Belzer: DNN receiving signal samples; equalizer receives readback signal; LLR inputs derived from read signal, ¶ [0043], [0066], [0093]); and determine, based on a third neural network configuration and a third set of trained node coefficients, an estimated parameter for modifying processing of the output read data signal (Belzer: noise estimation for improved detection; DNN media noise predictor; BER improvement using predicted statistics, ¶ [0002], [0020], [0093]); and adjustment logic configured to update, based on the estimated parameter, a corresponding operating parameter for the read channel circuit to modify processing of the output read data signal (Belzer: improved BER via modeling; predictor used in the prediction architecture; system adapts based on statistics, ¶ [0002], [0021], [0093]).
Regarding claim 10, the combination of Belzer and Berman discloses a plurality of intermediate neural network circuits (Belzer: DNN containing multiple layers; fully connected NN layers, ¶ [0010], [0032]-[0037]), wherein each intermediate neural network circuit of the plurality of intermediate neural network circuits (Belzer: layers trained with weights and coefficients; fully connected layer operations; softmax layer applying learned parameters, ¶ [0010], [0037], [0116]; the examiner notes that each hidden layer performs NN computation and each has trained coefficients) is configured to: receive at least one input read data signal corresponding to the at least one data symbol (Belzer: detector receives a readback waveform; read channel signal processing; PR linear equalizer receives samples read from HDD, Abstract, ¶ [0021], [0066]); and modify, based on a corresponding neural network configuration and a corresponding set of trained node coefficients, processing of the output read data signal (Belzer: trained weight coefficients; fully connected layers apply learned weights; DNN-based detection improves BER; NN modeling and detection, ¶ [0010], [0037], [0064], Abstract).
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Belzer in view of Berman, and further in view of ARORA, Ankur et al. (US 20220101327 A1) [Arora]. Regarding claims 4 and 14, the combination of Belzer and Berman discloses that the first node training logic [is configured to retrain] the first set of trained node coefficients [on a first time constant] (Belzer: NN detector/equalizer (NN detector configured to predict/cancel media noise); trained coefficients (weights) (DNN used for detection and decoding); training of NN nodes (DNN-based APP multi-track detector generates LLRs using a trained network), ¶ [0019], [0021], [0052]); and the second node training logic [is configured to retrain] the second set of trained node coefficients [on a second time constant] (Belzer: NN detector/equalizer (NN detector configured to predict/cancel media noise); trained coefficients (weights) (DNN used for detection and decoding); training of NN nodes (DNN-based APP multi-track detector generates LLRs using a trained network), ¶ [0019], [0021], [0052]). However, neither Belzer nor Berman explicitly discloses retraining on a second time constant, where the first time constant is different than the second time constant. Arora discloses being configured to retrain on a first time constant and configured to retrain on a second time constant (the NN is retrained; retraining occurs after a fixed time interval; the interval is defined by server/resource/accuracy considerations, ¶ [0050], [0116], [0131], and [0153]; these sections teach periodic retraining after a fixed time interval), and that the first time constant is different than the second time constant (fixed time interval and first time interval, ¶ [0050], [0116], [0131], and [0153]). It would have been obvious to one of ordinary skill in the art at the time of the present invention to combine the teachings of the cited references because Arora's system would have allowed Belzer and Berman to provide retraining on a second time constant, where the first time constant is different than the second time constant. The motivation to combine is apparent in the Belzer and Berman references, because there is a need for a technical solution that solves and captures complex interdependencies in transaction data for accurate detection of specific transactions.
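The claim 4/14 limitation for which Arora is cited amounts to retraining two coefficient sets on two different fixed intervals. The hypothetical scheduler below illustrates that idea only; the interval values and the retrain stubs are assumptions, not details from Arora or the claims.

```python
# Hypothetical retraining scheduler illustrating "retrain ... on a first
# time constant" vs. "on a second time constant" (claims 4 and 14).

FIRST_TIME_CONSTANT_S = 60.0     # assumed retraining interval for the first NN
SECOND_TIME_CONSTANT_S = 600.0   # assumed (different) interval for the second NN

def maybe_retrain(now_s, last_retrain_s, interval_s, retrain_fn):
    """Retrain when the configured interval has elapsed; return the new timestamp."""
    if now_s - last_retrain_s >= interval_s:
        retrain_fn()
        return now_s
    return last_retrain_s

last_first, last_second = 0.0, 0.0
for now in range(0, 1201, 60):  # simulated clock ticks (seconds)
    last_first = maybe_retrain(now, last_first, FIRST_TIME_CONSTANT_S,
                               lambda: print(f"{now:>5}s retrain first coefficient set"))
    last_second = maybe_retrain(now, last_second, SECOND_TIME_CONSTANT_S,
                                lambda: print(f"{now:>5}s retrain second coefficient set"))
```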
Conclusion
The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application. When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ROSTAMI, whose telephone number is (571) 270-1980. The examiner can normally be reached Mon-Fri from 9 a.m. to 5 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
2/26/2026
/MOHAMMAD S ROSTAMI/
Primary Examiner, Art Unit 2154

Prosecution Timeline

Jul 19, 2023: Application Filed
Feb 27, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596705
CHANGE CONTROL AND VERSION MANAGEMENT OF DATA
2y 5m to grant • Granted Apr 07, 2026
Patent 12579127
DETECTING LABELS OF A DATA CATALOG INCORRECTLY ASSIGNED TO DATA SET FIELDS
2y 5m to grant • Granted Mar 17, 2026
Patent 12561392
RELATIVE FUZZINESS FOR FAST REDUCTION OF FALSE POSITIVES AND FALSE NEGATIVES IN COMPUTATIONAL TEXT SEARCHES
2y 5m to grant • Granted Feb 24, 2026
Patent 12561360
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant • Granted Feb 24, 2026
Patent 12561312
DISTRIBUTED STREAM-BASED ACID TRANSACTIONS
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 93% (+26.3%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
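As a quick check on how the headline figures above appear to fit together, the sketch below assumes the "with interview" probability is simply the career allow rate plus the interview lift; this additive reading is an inference from the displayed numbers, not a documented formula of the dashboard.

```python
# Assumed (not documented) reconstruction of the dashboard's headline figures.

granted, resolved = 425, 635
career_allow_rate = granted / resolved    # ≈ 0.669, shown on the page as 67%
interview_lift = 0.263                    # +26.3% lift shown on the page

with_interview = career_allow_rate + interview_lift
print(f"career allow rate : {career_allow_rate:.1%}")  # 66.9%
print(f"with interview    : {with_interview:.1%}")     # ≈ 93.2%, shown as 93%
```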
