Prosecution Insights
Last updated: April 18, 2026
Application No. 18/055,395

Multi-Assistant Warm Words

Status: Non-Final OA (§103)
Filed: Nov 14, 2022
Examiner: CHUNG, DANIEL WONSUK
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 3 (Non-Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 54% (24 granted / 44 resolved; -7.5% vs TC avg)
Interview Lift: +37.5% (allowance rate for resolved cases with an interview vs. without)
Avg Prosecution: 2y 10m (typical timeline; 33 applications currently pending)
Total Applications: 77 (career history, across all art units)
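The interview-lift figure above is simple arithmetic on the examiner's resolved-case counts. A minimal check, using the dashboard's own numbers (24 granted of 44 resolved, 92% with-interview allowance):

```python
# Career allow rate and interview lift from the dashboard's counts.
granted, resolved = 24, 44

allow_rate = granted / resolved       # ~0.545, shown as "54%"
with_interview = 0.92                 # dashboard's with-interview allowance rate
lift = with_interview - allow_rate    # ~0.375, shown as "+37.5%"

print(f"Career allow rate: {allow_rate:.1%}")
print(f"With interview:    {with_interview:.0%}")
print(f"Interview lift:    {lift:+.1%}")
```

The lift is a difference in percentage points, not a relative increase; "Strong +38%" elsewhere on the page is the same 37.5-point figure rounded.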

Statute-Specific Performance

§101: 25.2% (-14.8% vs TC avg)
§103: 52.3% (+12.3% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 5.2% (-34.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 44 resolved cases
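The "vs TC avg" deltas imply the black-line baseline directly: subtracting each delta from the examiner's rate recovers the Tech Center estimate, and all four rows yield the same 40.0%, suggesting the dashboard compares every statute against a single TC-wide figure. A quick check with the dashboard's percentages:

```python
# Examiner per-statute rates and deltas vs the TC average, in percent,
# as shown on the dashboard.
stats = {"§101": (25.2, -14.8), "§103": (52.3, +12.3),
         "§102": (17.3, -22.7), "§112": (5.2, -34.8)}

for statute, (examiner_rate, delta) in stats.items():
    tc_avg = examiner_rate - delta  # recovered black-line estimate
    print(f"{statute}: examiner {examiner_rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```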

Office Action (§103)
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 2/12/2026. Claims 1-11, 13-25 and 27-28 are pending and have been examined. All previous objections / rejections not mentioned in this Office Action have been withdrawn by the examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

Regarding the Applicant’s arguments for the rejections under 35 U.S.C. § 102, applicant has amended independent claims 1 and 15 with the limitations in claim 12 “for each warm word in the respective active set of warm words for each respective digital assistant in the group of digital assistants, determining a time when the warm word was most recently detected by the MAD” and “the determined time that each warm word in the respective active set of warm words for each respective digital assistant was most recently detected by the MAD” and has further added the limitation “wherein the warm word arbitration routine assigns a higher priority for inclusion in the final set of warm to warm words that were detected by the MAD more recent in time”. Applicant asserts that the prior art reference D’Amato does not teach or suggest “determining a time when the warm word was most recently detected” and “warm word arbitration routine assigns a higher priority for inclusion in the final set of warm to warm words that were detected by the MAD more recent in time”. Examiner has brought in a new reference to reflect the introduction of the new claim limitation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-11, 13-25, 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over D’Amato et al (U.S. PG Pub No. 20230169956), hereinafter D’Amato, in view of Sharifi (U.S. PG Pub No. 20190115026). Regarding claim 1 and 15 D’Amato teaches: (Claim 1) A computer-implemented method when executed on data processing hardware causes the data processing hardware to perform operations comprising: (P0278, Systems, methods, apparatus.; P0068, Device includes at least one processor.) (Claim 15) A system comprising: data processing hardware; and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising: (P0068, Device includes at least one processor, which may be a clock-driven computing component configured to process input data according to instructions stored in memory. The memory may be a tangible, non-transitory, computer-readable medium configured to store instructions that are executable by the processor. For example, the memory may be data storage that can be loaded with software code that is executable by the processor to achieve certain functions.) 
for each respective digital assistant in a group of digital assistants enabled for simultaneous execution on a multi-assistant device (MAD), receiving a respective active set of warm words that each specify a respective action for the respective digital assistant to perform; (P0026, Network microphone devices may be used facilitate voice control of smart home devices.; Fig. 13, Plurality of devices.; P0091, Multiple playback devices may be “bonded” to form a “bonded pair,” which together form a single zone. … The merged playback devices may, aside from playing audio content in synchrony, each play audio content as they would if they were not merged.; P0266, Each of NLU1, NLU2, and NLU3 may store predetermined libraries of keywords that are substantially or completely identical, partially overlapping, or completely non-overlapping. In example implementations, the individual NMDs 103d, 103i, and 103e may synchronize or otherwise update the libraries of their respective local NLUs. For instance, the NMDs 103d, 103i, and 103e may share data representing the libraries of their respective local NLUs NLU1, NLU2, and NLU3, possibly using a network.; P0032, A keyword that invokes a command (referred to herein as a “command keyword”) may be a word or a combination of words (e.g., a phrase) that functions as a command itself, such as a playback command.; P0172, Available keywords to be identified by the command-keyword engine can be limited based on conditions as reflected via the state machines.) obtaining enabled warm word constraints for the MAD, the enabled warm word constraints comprising memory and computing resource availability on the MAD for detection of warm words; (P0210, Supporting such specific audible prompts may be made practicable by supporting a relatively limited number of command keywords (e.g., less than 100), but other implementations may support more command keywords with the trade-off of requiring additional memory and processing capability.) 
based on the respective active set of warm words associated with each digital assistant in the group of digital assistants, executing, by a multi-assistant interface executing on the MAD, a warm word arbitration routine to enable a final set of warm words for detection by the MAD based on the enabled warm word constraints and the determined time that each warm word in the respective active set of warm words for each respective digital assistant was most recently detected by the MAD, wherein the warm word arbitration routine assigns a higher priority for inclusion in the final set of warm to warm words that were detected by the MAD more recent in time, and wherein, each corresponding warm word in the final set of warm words enabled for detection by the MAD is selected from the respective active set of warm words for at least one digital assistant in the group of digital assistants; and (P0034, Detection of a command keyword can be limited by certain conditions. For example, if there is no content currently being played back, the available intents to be identified by the local NLU can be limited, for example by excluding keywords such as “pause,” “skip,” etc.; P0172, Available keywords to be identified by the command-keyword engine can be limited based on conditions as reflected via the state machines. As one example, the first state and the second state of the state machine may operate as enable/disable toggles to the command-keyword engine.; P0210, Supporting a relatively limited number of command keywords (e.g., less than 100), but other implementations may support more command keywords with the trade-off of requiring additional memory and processing capability.) while the final set of warm words are enabled for detection by the MAD: (P0266, Each of NLU1, NLU2, and NLU3 may store predetermined libraries of keywords that are substantially or completely identical, partially overlapping, or completely non-overlapping. 
In example implementations, the individual NMDs 103d, 103i, and 103e may synchronize or otherwise update the libraries of their respective local NLUs,; P0172, Available keywords to be identified by the command-keyword engine can be limited based on conditions as reflected via the state machines. As one example, the first state and the second state of the state machine may operate as enable/disable toggles to the command-keyword engine.) receiving audio data corresponding to an utterance captured by the MAD; (P0249, Receiving input sound data representing sound detected by one or more microphones of an NMD.) detecting, in the audio data, a warm word from the final set of warm words; and (P0250, Detecting, via a command-keyword engine (e.g., command-keyword engine of FIG. 7A), a first command keyword in a first voice input represented in the input sound data.) instructing, from the group of digital assistants, the digital assistant associated with the detected warm word to perform the respective action specified by the detected warm word. (P0258, Performing an action may involve transmitting one or more instructions over one or more networks. For instance, the NMD may transmit instructions locally over the network to one or more playback devices to perform instructions such as transport commands (FIG. 10), similar to the message exchange illustrated in FIG. 6. Further, the NMD may transmit requests to the streaming audio service service(s) to stream one or more audio tracks to the target playback device(s) for playback over the links (FIG. 10).) 
D’Amato does not specifically teach: for each warm word in the respective active set of warm words for each respective digital assistant in the group of digital assistants, determining a time when the warm word was most recently detected by the MAD; based on the respective active set of warm words associated with each digital assistant in the group of digital assistants, executing, by a multi-assistant interface executing on the MAD, a warm word arbitration routine to enable a final set of warm words for detection by the MAD based on the enabled warm word constraints and the determined time that each warm word in the respective active set of warm words for each respective digital assistant was most recently detected by the MAD, wherein the warm word arbitration routine assigns a higher priority for inclusion in the final set of warm to warm words that were detected by the MAD more recent in time, and wherein, each corresponding warm word in the final set of warm words enabled for detection by the MAD is selected from the respective active set of warm words for at least one digital assistant in the group of digital assistants; and Sharifi, however, teaches: for each warm word in the respective active set of warm words for each respective digital assistant in the group of digital assistants, determining a time when the warm word was most recently detected by the MAD; (P0068, The hotword manager may determine if the voice command “OPEN CALENDAR” should be designated as a hotword based at least on one or more of acoustic features stored for the voice command, usage statistics of the voice command, usage statistics of voice commands already designated as hotwords.; P0051, The hotword manager may undesignate the voice command that has been least recently used or least frequently used as a voice command.) 
based on the respective active set of warm words associated with each digital assistant in the group of digital assistants, executing, by a multi-assistant interface executing on the MAD, a warm word arbitration routine to enable a final set of warm words for detection by the MAD based on the enabled warm word constraints and the determined time that each warm word in the respective active set of warm words for each respective digital assistant was most recently detected by the MAD, wherein the warm word arbitration routine assigns a higher priority for inclusion in the final set of warm to warm words that were detected by the MAD more recent in time, and wherein, each corresponding warm word in the final set of warm words enabled for detection by the MAD is selected from the respective active set of warm words for at least one digital assistant in the group of digital assistants; and (P0068, The hotword manager may determine if the voice command “OPEN CALENDAR” should be designated as a hotword based at least on one or more of acoustic features stored for the voice command, usage statistics of the voice command, usage statistics of voice commands already designated as hotwords.;P0041, As an example, if the user says “OK COMPUTER, CALL MOM,” the hotword manager 150 may determine that the voice command “CALL MOM” is not in a list of hotwords 160. Accordingly, the hotword manager 150 may determine that the voice command “CALL MOM” should be designated as a hotword.; P0051, The hotword manager may undesignate the voice command that has been least recently used or least frequently used as a voice command. In some implementations, in determining whether to designate a voice command as a hotword, the hotword manager 150 may compare use of voice commands already designated as hotwords with the voice command that may be designated as a hotword.) 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to assign higher priority to warm words that were detected more recent in time. It would have been obvious to combine the references because designating voice commands as hotwords based on determining that the voice command satisfies one or more predetermined criteria allows the system to better discern when a given utterance is directed at the system. (Sharifi P0004-P0008) Regarding claim 2 and 16 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: wherein receiving the respective active set of warm words associated with the digital assistant comprises receiving, for at least one warm word in the respective active set of warm words, via a warm word application programming interface (API) executing on the MAD, a respective warm word model configured to detect the corresponding warm word in streaming audio without performing speech recognition. (P0075, Network interface, may take the form of one or more wireless interfaces and/or one or more wired interfaces. A wireless interface may provide network interface functions for the playback device to wirelessly communicate with other devices (e.g., other playback device(s), NMD(s), and/or controller device(s)) in accordance with a communication protocol.; P0266, In example implementations, the individual NMDs 103d, 103i, and 103e may synchronize or otherwise update the libraries of their respective local NLUs.; P0149, The VAS is configured to process the sound-data stream SDS contained in the messages My sent from the NMD. More specifically, the NMD is configured to identify a voice input based on the sound-data stream SDS. As described in connection with FIG. 2C, the voice input may include a keyword portion and an utterance portion. 
The keyword portion corresponds to detected sound that caused a wake-word event, or leads to a command-keyword event when one or more certain conditions, such as certain playback conditions, are met.) Regarding claim 3 and 17 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: receiving a user command specifying a long-standing operation for the corresponding digital assistant to perform; and performing, via the corresponding digital assistant, the long-standing operation specified by the user command, wherein receiving the respective active set of warm words associated with the digital assistant comprises receiving, in response to the corresponding digital assistant performing the long-standing operation, the respective active set of warm words associated with the corresponding digital assistant. (P0169, The NMD includes the one or more state machine(s) to facilitate determining whether the appropriate conditions are met. The state machine transitions between a first state and a second state based on whether one or more conditions corresponding to the detected command keyword are met. In particular, for a given command keyword corresponding to a particular command requiring one or more particular conditions, the state machine transitions into a first state when one or more particular conditions are satisfied and transitions into a second state when at least one condition of the one or more particular conditions is not satisfied.; P0172, The command-keyword engine may be disabled unless certain conditions have been met via the state machines, and/or the available keywords to be identified by the command-keyword engine can be limited based on conditions as reflected via the state machines. As one example, the first state and the second state of the state machine may operate as enable/disable toggles to the command-keyword engine. 
In particular, while a state machine corresponding to a particular command keyword is in the first state, the state machine enables the command-keyword engine of the particular command keyword. Conversely, while the state machine corresponding to the particular command keyword is in the second state, the state machine disables the command-keyword engine of the particular command keyword.) Regarding claim 4 and 18 D’Amato in view of Sharifi teach claim 3 and 17 . D’Amato further teaches: wherein each warm word in the respective active set of warm words is associated with a respective action for controlling the long-standing operation performed by the corresponding digital assistant. (P0186, Command keywords may require different conditions. For instance, the conditions for “skip” may be different than the conditions for “play” as “skip” may require that the condition that a media item is being played back and play may require the opposite condition that a media item is not being played back. To facilitate these respective conditions, the NMD may implement respective state machines corresponding to each command keyword.) Regarding claim 5 and 19 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: discovering a new digital assistant in the group of digital assistants enabled for simultaneous execution on the MAD, (P0041, The NMD may use a local area network to discover playback devices and/or smart devices connected to the network; P0091, Multiple playback devices may be “bonded” to form a “bonded pair,” which together form a single zone. … The merged playback devices may, aside from playing audio content in synchrony, each play audio content as they would if they were not merged.) wherein the multi-assistant interface executes the warm word arbitration routine in response to discovering the new digital assistant in the group of digital assistants. 
(P0217, The NMD may populate the library of the local NLU locally within the network.; P0218, The NMD may populate the library by discovering devices connected to the network.; P0266, Each of NLU1, NLU2, and NLU3 may store predetermined libraries of keywords that are substantially or completely identical, partially overlapping, or completely non-overlapping. In example implementations, the individual NMDs 103d, 103i, and 103e may synchronize or otherwise update the libraries of their respective local NLUs.) Regarding claim 6 and 20 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: determining that a digital assistant has been removed from the group of digital assistants enabled for simultaneous execution on the MAD, (P0111, The controller device may also communicate playback device control commands, such as volume control and audio playback control, to a playback device via the network interface. As suggested above, changes to configurations of the MPS may also be performed by a user using the controller device. The configuration changes may include adding/removing one or more playback devices to/from a zone, adding/removing one or more zones to/from a zone group, forming a bonded or merged player, separating one or more playback devices from a bonded or merged player, among others.) wherein the multi-assistant interface executes the warm word arbitration routine in response to determining that the digital assistant has been removed from the group of digital assistants. (P0237, Two or more NMDs may synchronize or otherwise update the libraries of their respective local NLU.; [Person of ordinary skill in the art would understand that synchronizing libraries would occur in situations of addition or removal of NMDs.]) Regarding claim 7 and 21 D’Amato in view of Sharifi teach claim 1 and 15. 
D’Amato further teaches: determining an addition of a warm word or a removal of a warm word in the respective active set of warm words associated with the corresponding digital assistant, wherein the multi-assistant interface executes the warm word arbitration routine in response to determining the addition of the warm word or the removal of the warm word in the respective active set of warm words. (P0172, The command-keyword engine may be disabled unless certain conditions have been met via the state machines, and/or the available keywords to be identified by the command-keyword engine can be limited based on conditions as reflected via the state machines. As one example, the first state and the second state of the state machine may operate as enable/disable toggles to the command-keyword engine. In particular, while a state machine corresponding to a particular command keyword is in the first state, the state machine enables the command-keyword engine of the particular command keyword. Conversely, while the state machine corresponding to the particular command keyword is in the second state, the state machine disables the command-keyword engine of the particular command keyword.; P0186, Command keywords may require different conditions. For instance, the conditions for “skip” may be different than the conditions for “play” as “skip” may require that the condition that a media item is being played back and play may require the opposite condition that a media item is not being played back. To facilitate these respective conditions, the NMD may implement respective state machines corresponding to each command keyword.) Regarding claim 8 and 22 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: determining a change in ambient context, wherein the multi-assistant interface executes the warm word arbitration routine in response to determining the change in ambient context. 
(P0217, The NMD may maintain or have access to state variables indicating the respective states of devices connected to the network (e.g., the playback devices). These state variables may include names of the various devices. For instance, the kitchen may include the playback device, which are assigned the zone name “Kitchen.” The NMD may read these names from the state variables and include them in the library of the local NLU by training the local NLU to recognize them as keywords. The keyword entry for a given name may then be associated with the corresponding device in an associated parameter (e.g., by an identifier of the device, such as a MAC address or IP address). The NMD can then use the parameters to customize control commands and direct the commands to a particular device.) Regarding claim 9 and 23 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: the enabled warm word constraints further comprise at least one of: computational requirements for enabling each warm word in respective active set of warm words associated with each respective digital assistant in the group of digital assistants; an acceptable false accept rate tolerance; or an acceptable false reject rate tolerance. (P0034, Detection of a command keyword can be limited by certain conditions. For example, if there is no content currently being played back, the available intents to be identified by the local NLU can be limited, for example by excluding keywords such as “pause,” “skip,” etc. Accordingly, while the media playback system is certain states, the range of potential keywords to be identified by the NLU can be limited to decrease the rate of false positives.) Regarding claim 10 and 24 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: identifying any shared warm words corresponding to warm words present in at least two of the active sets of warm words; and (P0045, The different libraries supported by different NMDs can include partitions. 
For example, the library of a first NMD can include a first partition of shared keywords and a second partition of dedicated keywords. The shared keywords may be separately stored on the libraries of other NMDs within the same system, while the dedicated keywords may be stored only on that library, or in some instances only on a subset of all the libraries in the system.) when enabling the final set of warm words is further based on assigning a higher priority to warm words identified as shared warm words. (P0045, In some embodiments, the shared keywords can include keywords used most often (e.g., common transport commands such as “pause,” “play,” etc.). By storing the most commonly used commands in libraries of each NMD, the system may more consistently and responsively detect these keywords and perform associated operations.) Regarding claim 11 and 25 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: for each warm word in the respective active set of warm words for each respective digital assistant in the group of digital assistants, determining a frequency of detection of the warm word by the MAD; when enabling the final set of warm words is further based on the determined frequency of detection of each warm word in the respective active set of warm words for each respective digital assistant in the group of digital assistants. (P0271, The shared partition x can include keywords associated with the most frequently used commands, while the dedicated partition x, y, and z can each store keywords associated with less frequently commands.) Regarding claim 13 and 27 D’Amato in view of Sharifi teach claim 1 and 15. 
D’Amato further teaches: receiving a voice command that commands the MAD to enable a first digital assistant and a second digital assistant to execute simultaneously on the MAD, the voice command spoken by a user of the MAD and captured by the MAD in streaming audio; and (P0153, After processing the voice input, the VAS may send a response to the MPS with an instruction to perform one or more actions based on an intent it determined from the voice input. For example, based on the voice input, the VAS may direct the MPS to initiate playback on one or more of the playback devices, control one or more of these playback devices (e.g., raise/lower volume, group/ungroup devices, etc.), or turn on/off certain smart devices, among other actions.) after receiving the voice command, enabling the first digital assistant and the second digital assistant to execute simultaneously with one another on the MAD, wherein the group of digital assistants comprises the first digital assistant and the second digital assistant. (P0116, Once grouped, playback devices in the zones that have been grouped with the particular zone will be configured to play audio content in synchrony with the playback device(s) in the particular zone.) Regarding claim 14 and 28 D’Amato in view of Sharifi teach claim 1 and 15. D’Amato further teaches: receiving, from a software application executing on the MAD or another device in communication with the MAD, a multi-assistant configuration request to enable a first digital assistant and a second digital assistant to execute simultaneously on the MAD; and (P0115, The graphical representations of playback zones may be selectable to bring up additional selectable icons to manage or configure the playback zones in the MPS, such as a creation of bonded zones, creation of zone groups, separation of zone groups, and renaming of zone groups, among other possibilities.) 
after receiving the multi-assistant configuration request, enabling the first digital assistant and the second digital assistant to execute simultaneously with one another on the MAD, wherein the group of digital assistants comprises the first digital assistant and the second digital assistant. (P0116, Once grouped, playback devices in the zones that have been grouped with the particular zone will be configured to play audio content in synchrony with the playback device(s) in the particular zone.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL WONSUK CHUNG whose telephone number is (571)272-1345. The examiner can normally be reached Monday - Friday (7am-4pm)[PT]. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PIERRE-LOUIS DESIR can be reached at (571)272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/DANIEL W CHUNG/Examiner, Art Unit 2659 /PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Nov 14, 2022: Application Filed
Jul 11, 2025: Non-Final Rejection — §103
Oct 13, 2025: Response Filed
Nov 05, 2025: Final Rejection — §103
Feb 12, 2026: Request for Continued Examination
Feb 23, 2026: Response after Non-Final Action
Apr 04, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579471: DATA AUGMENTATION AND BATCH BALANCING METHODS TO ENHANCE NEGATION AND FAIRNESS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12493892: METHOD AND SYSTEM FOR EXTRACTING CONTEXTUAL PRODUCT FEATURE MODEL FROM REQUIREMENTS SPECIFICATION DOCUMENTS (granted Dec 09, 2025; 2y 5m to grant)
Patent 12400078: INTERPRETABLE EMBEDDINGS (granted Aug 26, 2025; 2y 5m to grant)
Patent 12387000: PRIVACY-PRESERVING AVATAR VOICE TRANSMISSION (granted Aug 12, 2025; 2y 5m to grant)
Patent 12380875: SPEECH SYNTHESIS WITH FOREIGN FRAGMENTS (granted Aug 05, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 92% (+37.5%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
