Prosecution Insights
Last updated: April 19, 2026
Application No. 17/353,297

APPARATUS AND METHOD FOR SOURCE SEPARATION USING AN ESTIMATION AND CONTROL OF SOUND QUALITY

Final Rejection §103
Filed: Jun 21, 2021
Examiner: SONIFRANK, RICHA MISHRA
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
OA Round: 6 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 3m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 66% (above average; 250 granted / 379 resolved; +4.0% vs TC avg)
Interview Lift: +24.9% (strong; measured on resolved cases with interview)
Typical Timeline: 3y 3m average prosecution; 29 applications currently pending
Career History: 408 total applications across all art units

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 379 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been retrieved and the application has been accorded the priority benefit of EP18215707.3, filed on 12/21/2018.

Response to Amendment

No claims are amended. Claims 2-4 are cancelled. Claims 1 and 5-19 are presented for examination.

Response to Arguments

Applicant's arguments filed 8/18/2025 have been reviewed. The responses follow.

Response to §112(f) Statements

Applicant is advised to change the language, or the use of "processor" and "memory", so as not to invoke 112(f). At this time 112(f) is still invoked due to the claim language; refer to MPEP § 2181. Based on the claim language and MPEP § 2181, the current claims are properly construed as "means-plus-function" limitations invoking 35 U.S.C. 112(f). Similar to the examples in MPEP § 2181 (e.g., "means for"), the claim limitations use generic, functional placeholders, specifically "source separator", "determining module" and "signal processor", associated with functional language; these are mere functional modules. While it is argued that these limitations are performed by a specific structure, this interpretation does not overcome the 112(f) presumption because the claim does not recite sufficient structure for performing the function. Under MPEP § 2181, these functional modules, which lack specific structural definition in the claim itself, must be interpreted under 35 U.S.C. 112(f) to cover only the corresponding algorithms or structures described in the specification.

Additionally, see the example provided in MPEP § 2181: with respect to the first prong of this analysis, a claim element that does not include the term "means" or "step" triggers a rebuttable presumption that 35 U.S.C. 112(f) does not apply. When the claim limitation does not use the term "means," examiners should determine whether the presumption that 35 U.S.C. 112(f) does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term "means"). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Mass. Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886-87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). Note that there is no fixed list of generic placeholders that always result in 35 U.S.C. 112(f) interpretation, and likewise there is no fixed list of words that always avoid 35 U.S.C. 112(f) interpretation. Every case will turn on its own unique set of facts.

Response to 35 U.S.C. § 103 Arguments

Applicant argues: "Claim 1 recites, inter alia, 'wherein the signal processor is configured to generate the separated audio signal depending on the one or more parameter values and depending on a linear combination of the estimated target signal and the estimated residual signal.'
The Office acknowledges that Jeong does not teach this feature and relies on Andersen, paragraph [0169], to supply the missing limitation. However, Andersen does not disclose this feature. Andersen's paragraph [0169] discloses that 'the presented signal u_local is given by u_local = a*y_local + (1-a)*x_wireless, where y_local is the microphone signal of the hearing aid user (local = left or right), and x_wireless is the signal (= signal x in FIG. 6A, 6B, 6C, 6D) picked up at the target talker (TLK) and wirelessly transmitted to the hearing aid(s), and 0 <= a <= 1 is a free parameter.'

To properly map Andersen's disclosure to claim 1, the claim language must be considered. Claim 1 defines:
- 'An apparatus for generating a separated audio signal from an audio input signal';
- the audio input signal comprises a target audio signal portion and a residual audio signal portion;
- the residual audio signal portion indicates a residual between the audio input signal and the target audio signal portion.

Under this claim construction, if Andersen's u_local is to be the 'separated audio signal' of claim 1, then the linear combination must be of 'the estimated target signal and the estimated residual signal' as recited in claim 1. However, in Andersen's formula u_local = a*y_local + (1-a)*x_wireless:
- y_local is 'the microphone signal of the hearing aid user' (Andersen, paragraph [0169]);
- x_wireless is 'the signal picked up at the target talker (TLK) and wirelessly transmitted to the hearing aid(s)' (Andersen, paragraph [0169]).

Thus, y_local is a microphone signal (which would correspond to an input mixture signal, not an estimated target or residual signal), and x_wireless is a signal picked up at the target talker location (which would correspond to a clean target signal, not an estimated residual signal). Andersen's linear combination is therefore a combination of a microphone signal (y_local) and a wirelessly-received target talker signal (x_wireless). This is not a linear combination of 'the estimated target signal and the estimated residual signal' as required by claim 1. Neither y_local nor x_wireless corresponds to an 'estimated residual signal' as defined by claim 1, i.e., 'an estimate of a signal that only comprises the residual audio signal portion.'"

However, Jeong already teaches the concept that operations including source separation are performed repeatedly depending on parameter values like those shown in Equations 5 through 7 of paragraph [0061], and depending eventually on the input signal, the target signal and/or the residual signal, the latter being, in Equation 4, anything but the target signal. Andersen is relied upon to teach the concept of calculating a signal based on a linear combination: y_local is an input signal and x_wireless is the target signal, and u_local corresponds to the mixed signal, which by manipulating "a" will result in a separated signal, hence providing speech intelligibility. Additionally, the examiner relied on Faller to teach that a residual audio signal (v(t)) is a difference between the input audio signal (101) and the summed audio signal (y(t)); a virtual bass signal (w(t)) comprises one or more harmonics of the residual audio signal (v(t)); and an output audio signal (103) is generated by summing the summed audio signal (y(t)) and the virtual bass signal (w(t)) (Abstract). Hence the combination of Jeong (US Pub: 20170251320) in view of Andersen (US Pub: 20170272870) and further in view of Faller (US Pub: 20190098407) teaches the entire concept.
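To make the dispute concrete, here is a minimal NumPy sketch contrasting the two linear combinations: Andersen's mix of the raw microphone signal with the wirelessly received talker signal, versus the claim's mix of an estimated target with an estimated residual. All signals, the crude "separator" output, and the parameter values are hypothetical stand-ins, not anything disclosed in the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
target = rng.standard_normal(n)      # hypothetical clean target talker signal
noise = rng.standard_normal(n)       # hypothetical interference
mixture = target + noise             # audio input signal (mixture)

# Andersen, para [0169]: combine the *microphone mixture* with the *clean,
# wirelessly received target* -- neither term is an estimated residual.
a = 0.3                              # free parameter, 0 <= a <= 1
u_local = a * mixture + (1 - a) * target

# Claim 1 (as argued by applicant): combine an *estimated target* with an
# *estimated residual*, where residual = mixture minus the target estimate.
s_hat = 0.8 * target + 0.2 * noise   # stand-in for a source separator's output
r_hat = mixture - s_hat              # estimated residual signal
p1 = 0.7                             # parameter value from the quality estimate
y_sep = p1 * s_hat + (1 - p1) * r_hat
```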
The examiner suggested removing the alternate language and explaining the control parameter as described in the specification to place this application in condition for allowance. The examiner called the applicant's representative; at this time they do not want to take the examiner's suggestion and want to see the office action.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5-6, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong (US Pub: 20170251320) in view of Andersen (US Pub: 20170272870) and further in view of Faller (US Pub: 20190098407).

Regarding claim 1, Jeong teaches an apparatus for generating a separated audio signal from an audio input signal (separator, Fig. 1, Para 0044), wherein the audio input signal comprises a target audio signal portion and a residual audio signal portion, wherein the residual audio signal portion indicates a residual between the audio input signal and the target audio signal portion (the mixture signal is an input to the separator, Para 0043-0045, Fig. 1), wherein the apparatus comprises: a source separator for determining an estimated target signal which depends on the audio input signal, the estimated target signal being an estimate of a signal that only comprises the target audio signal portion (separate the sound sources, Para 0043, 0058, Fig. 2); a determining module, wherein the determining module is configured to determine one or more result values depending on an estimated sound quality of the estimated target signal to acquire one or more parameter values, wherein the one or more parameter values are the one or more result values or depend on the one or more result values (evaluate the sound quality, Para 0058-0065, Fig. 2); and a signal processor for generating the separated audio signal depending on the one or more parameter values and depending on at least one of the estimated target signal and the audio input signal and an estimated residual signal, the estimated residual signal being an estimate of a signal that only comprises the residual audio signal portion (Fig. 2 and paragraphs [0045] and [0062]: operations including source separation are performed repeatedly depending on parameter values like those shown in Equations 5 through 7 of paragraph [0061] and depending eventually on the input signal, target signal and/or residual signal, the latter being, in Equation 4, anything but the target signal); wherein the determining module is configured to estimate, depending on the estimated residual signal, a sound quality value as the one or more result values, wherein the sound quality value indicates the estimated sound quality of the estimated target signal, and wherein the determining module is configured to determine the one or more parameter values depending on the sound quality value ("When the objective evaluation index defined in operation 250 is less than a preset threshold value in operation 260, the multilingual audio content creating apparatus 100 adjusts the signal intensity and the azimuth angle of each of the sound sources in operation 280. Subsequently, the multilingual audio content creating apparatus 100 may generate the new left stereo signal S_L(t) and the right stereo signal S_R(t) and evaluate the sound quality of each of the sound sources by separating the sound sources. The multilingual audio content creating apparatus 100 may repeatedly perform operations 230 through 260 until the objective evaluation index of each of the sound sources is greater than or equal to the preset threshold," Para 0062):

y_n = p_1 ŝ_n + (1 - p_1) b̂_n, where ŝ_n denotes the estimated target signal and b̂_n the estimated residual signal.

Jeong does not teach wherein the signal processor is configured to generate the separated audio signal depending on the one or more parameter values and depending on a linear combination of the estimated target signal and the audio input signal, or wherein the signal processor is configured to generate the separated audio signal depending on the one or more parameter values and depending on a linear combination of the estimated target signal and the estimated residual signal. However, Andersen teaches wherein the signal processor is configured to generate the separated audio signal depending on the one or more parameter values and depending on a linear combination of the estimated target signal and the estimated residual signal (linear combination of input signal (microphone signal) and target talker, Para 0169; Fig. 6A-6D). It would have been obvious, having the teachings of Jeong, to further include the concept of Andersen before the effective filing date to optimize speech intelligibility.

Jeong modified by Andersen does not explicitly teach the residual signal; however, it is known in the art that the residual signal depends on the input signal, and Faller teaches the concept of a residual signal (a residual audio signal (v(t)) is a difference between the input audio signal (101) and the summed audio signal (y(t)); a virtual bass signal (w(t)) comprises one or more harmonics of the residual audio signal (v(t)); an output audio signal (103) is generated by summing the summed audio signal (y(t)) and the virtual bass signal (w(t)), Abstract). It would have been obvious, having the teachings of Jeong and Andersen, to further modify with the concept of Faller before the effective filing date, since it is known in the art that the residual signal can be calculated from the input signal, to improve the perceived quality of the audio signal.

Regarding claim 5, Jeong as above in claim 4 teaches wherein the signal processor is configured to generate the separated audio signal by determining a first version of the separated audio signal and by modifying the separated audio signal one or more times to acquire one or more intermediate versions of the separated audio signal, wherein the determining module is configured to modify the sound quality value depending on one of the one or more intermediate values of the separated audio signal, and wherein the signal processor is configured to stop modifying the separated audio signal if the sound quality value is greater than or equal to a defined quality value (repeatedly perform the operation, Para 0062).

Regarding claim 6, Jeong modified by Jensen as above in claim 1 teaches wherein the determining module is configured to determine the one or more result values depending on the estimated target signal and depending on at least one of the audio input signal and the estimated residual signal (Fig. 2).

Regarding claim 14, Jeong as above in claim 1 teaches wherein the signal processor is configured to generate the separated audio signal depending on the one or more parameter values and depending on a postprocessing of the estimated target signal (after the energy adjustment, Fig. 2).

Regarding claim 15, arguments analogous to claim 1 are applicable. In addition, Jeong teaches a method for generating a separated audio signal from an audio input signal (Abstract).

Regarding claim 16, arguments analogous to claim 1 are applicable. In addition, Jeong teaches a non-transitory digital storage medium having a computer program stored thereon to perform the method for generating a separated audio signal from an audio input signal (Para 0088-0089).

Regarding claims 17, 18, and 19, arguments analogous to claim 1 are applicable.
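The control loop as the examiner maps it (Jeong's repeat-until-threshold loop of paragraph [0062], a residual computed from the input per Faller, and the claimed linear combination y_n = p_1 ŝ_n + (1 - p_1) b̂_n) can be sketched as follows. This is a minimal, hypothetical NumPy illustration: the moving-average "separator", the energy-ratio quality index, the threshold, and the parameter update rule are all stand-ins, not the actual disclosures of Jeong, Andersen, or Faller.

```python
import numpy as np

def naive_separator(x: np.ndarray) -> np.ndarray:
    """Hypothetical source separator: a crude moving average as the target estimate."""
    kernel = np.ones(8) / 8
    return np.convolve(x, kernel, mode="same")

def quality_index(s_part: np.ndarray, b_part: np.ndarray) -> float:
    """Hypothetical objective evaluation index: target-to-residual energy ratio in dB."""
    return 10.0 * np.log10(np.sum(s_part**2) / (np.sum(b_part**2) + 1e-12))

def separate_with_quality_control(x: np.ndarray,
                                  threshold: float = 6.0,
                                  max_rounds: int = 20) -> np.ndarray:
    s_hat = naive_separator(x)   # estimated target signal
    b_hat = x - s_hat            # estimated residual: input minus target estimate (cf. Faller)
    p1 = 0.5                     # parameter value, adjusted by the quality estimate
    y = p1 * s_hat + (1 - p1) * b_hat
    # Repeat until the quality index meets the preset threshold (cf. Jeong, Para 0062).
    for _ in range(max_rounds):
        if quality_index(p1 * s_hat, (1 - p1) * b_hat) >= threshold:
            break
        p1 = min(1.0, p1 + 0.05)            # shift weight toward the target estimate
        y = p1 * s_hat + (1 - p1) * b_hat   # y_n = p1*s_hat_n + (1 - p1)*b_hat_n
    return y

# Example usage on a synthetic mixture:
rng = np.random.default_rng(1)
y = separate_with_quality_control(rng.standard_normal(2048))
```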
Claims 7-13 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong (US Pub: 20170251320) in view of Jensen (US Pub: 20190378531), further in view of Faller (US Pub: 20190098407), and further in view of Yan ("Perceptually Guided Speech Enhancement Using Deep Neural Networks").

Regarding claim 7, Jeong as above in claim 1 does not teach wherein the determining module comprises an artificial neural network for determining the one or more result values depending on the estimated target signal, wherein the artificial neural network is configured to receive a plurality of input values, each of the plurality of input values depending on at least one of the estimated target signal and the estimated residual signal and the audio input signal, and wherein the artificial neural network is configured to determine the one or more result values as one or more output values of the artificial neural network. However, Yan teaches the determining module comprises an artificial neural network for determining the one or more result values depending on the estimated target signal (loss based on target, Fig. 1, Experimental), wherein the artificial neural network is configured to receive a plurality of input values, each of the plurality of input values depending on at least one of the estimated target signal and the estimated residual signal and the audio input signal (backpropagation based on target, under Introduction), and wherein the artificial neural network is configured to determine the one or more result values as one or more output values of the artificial neural network (loss, Fig. 1). It would have been obvious, having the teachings of Jeong and Andersen, to further include the neural network concept of Yan before the effective filing date, since supervised-learning-based speech enhancement approaches have achieved substantial success and show significant improvements over the conventional approaches (Abstract, Yan).

Regarding claim 8, Yan as above in claim 7 teaches wherein each of the plurality of input values depends on at least one of the estimated target signal and the estimated residual signal and the audio input signal, and wherein the one or more result values indicate the estimated sound quality of the estimated target signal ("STOI, PESQ and SDR [20] are used to evaluate speech intelligibility and sound quality," Evaluation result).

Regarding claim 9, Yan as above in claim 7 teaches wherein each of the plurality of input values depends on at least one of the estimated target signal and the estimated residual signal and the audio input signal, and wherein the one or more result values are the one or more parameter values (evaluation result, backpropagation, under Experimental setup; Evaluation result).

Regarding claim 10, Jensen as above in claim 7 mentions training; however, it does not explicitly teach that the artificial neural network is configured to be trained by receiving a plurality of training sets, wherein each of the plurality of training sets comprises a plurality of input training values of the artificial neural network and one or more output training values of the artificial neural network, wherein each of the plurality of output training values depends on at least one of a training target signal and a training residual signal and a training input signal, wherein each of the one or more output training values depends on an estimation of a sound quality of the training target signal. However, Yan teaches the artificial neural network is configured to be trained by receiving a plurality of training sets, wherein each of the plurality of training sets comprises a plurality of input training values of the artificial neural network and one or more output training values of the artificial neural network (clean and noisy speech, Fig. 1), wherein each of the plurality of output training values depends on at least one of a training target signal and a training residual signal and a training input signal ("we can optimize a modified STOI function f based loss by using backpropagation (BP) algorithm," under Algorithm Description), wherein each of the one or more output training values depends on an estimation of a sound quality of the training target signal (STOI, PESQ and SDR [20] are used to evaluate speech intelligibility and sound quality, Table 1 and Table 2).

Regarding claim 11, Yan as above in claim 10 teaches wherein the estimation of the sound quality of the training target signal depends on one or more computational models of sound quality (STOI, under Evaluation Result).

Regarding claim 12, Yan as above in claim 11 teaches wherein the one or more computational models of sound quality are at least one of: Blind Source Separation Evaluation, Perceptual Evaluation methods for Audio Source Separation, Perceptual Evaluation of Audio Quality, Perceptual Evaluation of Speech Quality, Virtual Speech Quality Objective Listener Audio, Hearing-Aid Audio Quality Index, Hearing-Aid Speech Quality Index, Hearing-Aid Speech Perception Index, and Short-Time Objective Intelligibility (short-time objective intelligibility measure (STOI), under Introduction).

Regarding claim 13, Yan as above in claim 7 teaches wherein the artificial neural network is configured to determine the one or more result values depending on the estimated target signal and depending on at least one of the audio input signal and the estimated residual signal (STOI, Fig. 1).
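The structure the examiner maps onto Yan (an artificial neural network that consumes values derived from the estimated target, estimated residual, and input signals and emits a quality result value) can be sketched as below. The log-energy features, layer sizes, and random untrained weights are hypothetical; Yan's actual network is trained with an STOI-based loss via backpropagation, which this forward-pass sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_energy_features(s_hat, b_hat, x):
    """Hypothetical input values derived from the estimated target (s_hat),
    estimated residual (b_hat), and audio input signal (x)."""
    return np.array([np.log(np.sum(sig ** 2) + 1e-12) for sig in (s_hat, b_hat, x)])

# A tiny two-layer perceptron with untrained weights, just to show the data flow
# from a plurality of input values to one output value (the quality estimate).
W1, c1 = 0.1 * rng.standard_normal((16, 3)), np.zeros(16)
W2, c2 = 0.1 * rng.standard_normal((1, 16)), np.zeros(1)

def quality_network(s_hat, b_hat, x) -> float:
    h = np.tanh(W1 @ log_energy_features(s_hat, b_hat, x) + c1)
    return float(W2 @ h + c2)   # one result value per forward pass

# Example usage on synthetic signals:
x = rng.standard_normal(1024)
s_hat = 0.7 * x                  # stand-in estimated target
b_hat = x - s_hat                # stand-in estimated residual
print(quality_network(s_hat, b_hat, x))
```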
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Finaur (US Pub: 20190362736, Para 0045-0047); Defraene (US Pub: 20190122685).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Richa Sonifrank, whose telephone number is (571) 272-5357. The examiner can normally be reached M-T 7AM-5:30PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Phan Hai, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Richa Sonifrank/
Primary Examiner, Art Unit 2654

Prosecution Timeline

Jun 21, 2021: Application Filed
Jun 07, 2023: Non-Final Rejection — §103
Nov 13, 2023: Response Filed
Jan 29, 2024: Final Rejection — §103
Jul 01, 2024: Request for Continued Examination
Jul 03, 2024: Response after Non-Final Action
Sep 16, 2024: Non-Final Rejection — §103
Feb 18, 2025: Response Filed
Mar 12, 2025: Final Rejection — §103
Aug 18, 2025: Request for Continued Examination
Aug 25, 2025: Response after Non-Final Action
Sep 17, 2025: Non-Final Rejection — §103
Feb 19, 2026: Response Filed
Mar 23, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602552: Machine-Learning-Based OKR Generation (2y 5m to grant; granted Apr 14, 2026)
Patent 12603085: ENTITY LEVEL DATA AUGMENTATION IN CHATBOTS FOR ROBUST NAMED ENTITY RECOGNITION (2y 5m to grant; granted Apr 14, 2026)
Patent 12585883: COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA (2y 5m to grant; granted Mar 24, 2026)
Patent 12585877: GROUPING AND LINKING FACTS FROM TEXT TO REMOVE AMBIGUITY USING KNOWLEDGE GRAPHS (2y 5m to grant; granted Mar 24, 2026)
Patent 12579988: METHOD AND APPARATUS FOR CONTROLLING AUDIO FRAME LOSS CONCEALMENT (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 66%
With Interview: 91% (+24.9%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 379 resolved cases by this examiner. Grant probability derived from career allow rate.
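The headline projections follow from simple arithmetic on the career statistics shown above. A small sketch reproducing them; the dashboard's exact model is not disclosed, so the additive interview lift used here is an assumption consistent with the displayed numbers:

```python
# Reconstructing the displayed figures from the examiner's career data.
granted, resolved = 250, 379
allow_rate = granted / resolved                 # 0.6596... -> "66% Grant Probability"
interview_lift = 0.249                          # "+24.9% Interview Lift"
with_interview = allow_rate + interview_lift    # 0.9086... -> "91% With Interview"
print(f"base: {allow_rate:.1%}, with interview: {with_interview:.1%}")
# base: 66.0%, with interview: 90.9%
```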
