Prosecution Insights
Last updated: April 19, 2026
Application No. 18/393,446

INFORMATION DISPLAY METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

Final Rejection — §103, §112

Filed: Dec 21, 2023
Examiner: SHANG, ANNAN Q
Art Unit: 2424
Tech Center: 2400 — Computer Networks
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 6 (Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 7m
With Interview: 82%

Examiner Intelligence

Grants 71% — above average

Career Allow Rate: 71% (581 granted / 821 resolved; +12.8% vs TC avg)
Interview Lift: +10.7% among resolved cases with interview (moderate, roughly +11%)
Avg Prosecution: 3y 7m typical timeline; 40 applications currently pending
Total Applications: 861 across all art units (career history)
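
For readers who want to see how these headline figures fit together, here is a minimal sketch that recomputes them from per-case records. The record fields ("resolved", "granted", "had_interview") are illustrative assumptions, not the tool's actual schema.

```python
# A minimal sketch, assuming hypothetical per-case records; not the
# analytics tool's actual implementation or schema.

def examiner_metrics(cases: list[dict]) -> dict:
    resolved = [c for c in cases if c["resolved"]]
    granted = sum(c["granted"] for c in resolved)
    with_iv = [c for c in resolved if c["had_interview"]]
    without_iv = [c for c in resolved if not c["had_interview"]]

    rate_with = sum(c["granted"] for c in with_iv) / len(with_iv)
    rate_without = sum(c["granted"] for c in without_iv) / len(without_iv)
    return {
        # 581 granted / 821 resolved ~= 70.8%, shown above as "71%"
        "career_allow_rate": granted / len(resolved),
        # difference in allow rate with vs. without an interview,
        # reported above as a +10.7-point lift
        "interview_lift": rate_with - rate_without,
    }
```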

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 27.4% (-12.6% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 821 resolved cases.
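
One quirk worth noting: every share above plus its quoted delta resolves to the same 40%, so the Tech Center baseline appears to be a single flat estimate rather than a per-statute figure. A small snippet to verify; the 40% baseline is an inference from the quoted numbers, not a published statistic:

```python
# Shares are from the panel above; the flat baseline is an assumption
# recovered from the quoted "+/- vs TC avg" deltas.
EXAMINER_SHARE = {"101": 0.035, "103": 0.465, "102": 0.274, "112": 0.088}
TC_AVG_ESTIMATE = 0.40

for statute, share in EXAMINER_SHARE.items():
    delta = share - TC_AVG_ESTIMATE
    print(f"§{statute}: {share:.1%} ({delta:+.1%} vs TC avg)")
# e.g. "§101: 3.5% (-36.5% vs TC avg)" -- matching the panel
```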

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

2. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

3. Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. In particular, claim 1, line 18, recites "recommend video, without relying on video images…". It appears Applicant's disclosure does not have support for the amended claim limitation, i.e., "…without relying on video images" (all negative limitations must be fully supported in Applicant's disclosure). Independent claims 6 and 11 are rejected on the same ground(s).

Response to Arguments

4. Applicant's arguments with respect to claims 1-18 have been considered but are moot in view of a new ground(s) of rejection. The amendments to the claims necessitated the new ground(s) of rejection discussed below.

With respect to the last office action, Applicant amends the claims, discusses the claim limitations and the office action, and further argues that the prior arts of record (PARs) do not teach the amended claim limitations (see Applicant's Remarks). In response, Examiner notes Applicant's arguments/amendments; however, the amendments do not overcome the PARs for the following reasons. Please note the 112 rejection above.

The primary PAR (PPAR, or YANG) discloses (please note the highlighted text) that the user can interact via voice or audio ("say something") with the displayed or recommended audio text and further reach additional information or results of the query or search via the client/server, which meets the amended claim limitations. Furthermore, in PPAR the audio clip information is acquired according to audio data corresponding to the voice broadcast (wireless communication) in the recommended video and comprises identification information and association information of a target audio clip in the recommended video. The target audio clip corresponds to a text sentence or a text sentence segment in the live stream viewer room (items or objects: electronic skipping rope, brown bear, baseball cap, etc.; fig. 5, [0069-0075] and [0089-0104]) containing a preset keyword in speech recognition text of the voice broadcast in the recommended video, wherein the preset keyword is related to at least one of preferential information or description information of the target object. The identification information is configured to identify the target audio clip, and the association information comprises summary information or key information of content broadcast in the target audio clip (see Server or Client device; figs. 1-12, Abstract, [0003-0015], [0032-0044], [0048-0082], [0087-0104], [0111-0125] and [0143-0158]).

The viewer room or Client sends interactions to the Server for a recommended video in a live stream and receives target voice/audio data and target videos (frame(s) or index): keywords and other data or information associated with target objects or items within the live stream, and other recommended videos of those objects or items. The Server further receives the user's ID, item or object tag, name, keyword(s) and other information, and performs recognition rules or operations, feature classification and other indexing during viewer room interactions, using trained recognition model(s) and classifiers and interaction systems which include speech, natural language, DL and other AI systems. With the acquired target voice/audio data and target videos, matching and filtering generate keywords and other data or information associated with target objects or items within the live stream and other recommended videos of those objects or items. A user input as to item(s) or object(s) of interest generates an associated audio clip corresponding to that input; further input prompts the user to "say something," which updates the audio clip accordingly.

Playing the recommended video (Viewer Room or Client): in response to the recommended video being played to a target video clip corresponding to the target audio clip, the association information is displayed in a first display area of a video playback interface, and a target text sentence corresponding to the target audio clip is displayed in the same area, wherein the target text sentence is obtained by the server performing speech recognition on the target audio clip. The server processes speech and other audio data, performs speech recognition on the audio data to obtain speech recognition text, acquires features, determines corresponding time information of each word in the audio recognition text in the audio data based on other features, and determines the audio clip information of the target audio clip in the recommended video according to the speech recognition text and the time information (see [0032-0044], [0048-0082], [0087-0104], [0111-0125] and [0143-0158]). During viewer room interactions, the Client device plays the requested content (target voice/audio data and target videos) in a specific region superimposed on the live stream, displayed based on the display information.

YANG discloses wherein the audio clip information comprises identification information and association information of a target audio clip in the recommended video, the target audio clip includes a preset keyword, the identification information is configured to identify the target audio clip, and the association information is related to content live (broadcast) in the target audio clip ([0069-0076], [0095-0097], [0132-0143] and [0184-0196]); BUT YANG appears silent as to where the information associated with an audio clip is information related to content broadcast in the target audio clip. However, in the same field of endeavor, ARCHIBONG discloses capturing and sharing audio, video, etc., and further discloses where the information associated with an audio clip is information related to content broadcast in the target audio clip (see figs. 1-36, Abstract, [0062-0063], [0079-0083], [0113-0122], [0153-0164] and [0194-0202]): capturing and sharing broadcast objects, audio, video, song, closed captioning, and further sharing notifications, overlays, etc., as discussed below.

The amendments to the claims necessitated the new ground(s) of rejection. Note the other 103 rejection discussed below. This office action is non-final.

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

6. Claims 6-10, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over YANG et al. (2022/0239988) in view of ARCHIBONG et al. (2014/0068692).
As to claim 6: YANG discloses a display method and apparatus for item information, and a device, and further discloses an information display method comprising: sending an information acquisition request for a recommended video recommending a target object to a server, and receiving audio clip information returned by the server based on the information acquisition request, wherein the audio clip information comprises identification information and association information of a target audio clip in the recommended video; the target audio clip corresponds to a text sentence or a text sentence segment in the live stream viewer room (items or objects: electronic skipping rope, brown bear, baseball cap, etc.; fig. 5, [0069-0075] and [0089-0104]) containing a preset keyword in speech recognition text of the recommended video; the identification information is configured to identify the target audio clip; the association information comprises summary information or key information of content broadcast in the target audio clip; and the audio clip information and the corresponding target audio clip are acquired based solely on processing of the audio data of the recommended video, without relying on video images of the recommended video (Server or Client device; figs. 1-12, Abstract, [0003-0015], [0032-0044], [0048-0082], [0087-0104], [0111-0125] and [0143-0158]).

The viewer room or Client sends interactions to the Server for a recommended video in a live stream and receives target voice/audio data and target videos (frame(s) or index): keywords and other data or information associated with target objects or items within the live stream, and other recommended videos of those objects or items. The Server further receives the user's ID, item or object tag, name, keyword(s) and other information, and performs recognition rules or operations, feature classification and other indexing during viewer room interactions, using trained recognition model(s) and classifiers and interaction systems which include speech, natural language, DL and other AI systems. With the acquired target voice/audio data and target videos, matching and filtering generate keywords and other data or information associated with target objects or items within the live stream and other recommended videos of those objects or items. A user input as to item(s) or object(s) of interest generates an associated audio clip corresponding to that input; further input prompts the user to "say something," which updates the audio clip accordingly. The user further interacts via voice or audio ("say something") with the displayed or recommended audio text and reaches additional information or results of the query or search via the client/server.

YANG also discloses playing the recommended video (Viewer Room or Client), and, in response to the recommended video being played to a target video clip corresponding to the target audio clip, displaying the association information in a first display area of a video playback interface and displaying a target text sentence corresponding to the target audio clip in the same area, wherein the target text sentence is obtained by the server performing speech recognition on the target audio clip. The server processes speech and other audio data, performs speech recognition on the audio data to obtain speech recognition text, acquires features, determines corresponding time information of each word in the audio recognition text in the audio data based on other features, and determines the audio clip information of the target audio clip in the recommended video according to the speech recognition text and the time information ([0032-0044], [0048-0082], [0087-0104], [0111-0125] and [0143-0158]). During viewer room interactions, the Client device plays the requested content (target voice/audio data and target videos) in a specific region superimposed on the live stream, displayed based on the display information.

YANG discloses wherein the audio clip information comprises identification information and association information of a target audio clip in the recommended video, the target audio clip includes a preset keyword, the identification information is configured to identify the target audio clip, and the association information is related to content live (broadcast) in the target audio clip ([0069-0076], [0095-0097], [0132-0143] and [0184-0196]); BUT YANG appears silent as to where the information associated with an audio clip is information related to content broadcast in the target audio clip. However, in the same field of endeavor, ARCHIBONG discloses capturing and sharing audio, video, etc., and further discloses where the information associated with an audio clip is information related to content broadcast in the target audio clip (figs. 1-36, Abstract, [0062-0063], [0079-0083], [0113-0122], [0153-0164] and [0194-0202]): capturing and sharing broadcast objects, audio, video, song, closed captioning, and further sharing notifications, overlays, etc.

Hence, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate the teaching of ARCHIBONG into the system of YANG to further process audio or speech and enhance the display information with additional data associated with target broadcast items or objects.
As to claims 8-9: YANG further discloses, in response to the playback of the target video clip being completed, moving the association information from the first display area to a second display area of the video playback interface and, during the movement, displaying the association information in a gradually shrinking manner, wherein the second display area is a brief information display area of the target object or a detailed information display area of the target object; and, after moving the association information from the first display area to the second display area: in a case that the second display area is the brief information display area, the association information is displayed in the second display area; in a case that the second display area is the detailed information display area, the detailed information corresponding to the association information is displayed in the second display area and displaying of the association information is cancelled ([0048-0082], [0087-0104], [0111-0125], [0143-0158] and [0166-0190]). The Server obtains the recommended video and deletes tag display information; the recommended video and generated text may be displayed superimposed on various regions of the live stream. While displaying the item or video recommendation, attribute information is retrieved and displayed accordingly in a region or position which can be moved around and can further transition to a small window based on the interactions, with various display manners including first, second, etc. transparency.

As to claim 10: YANG further discloses that, in response to the recommended video being played to a first time node, the brief information of the target object is displayed in the brief information display area of the video playback interface; and, in response to the recommended video being played to a second time node, the detailed information of the target object is displayed in the detailed information display area of the video playback interface and the displaying of the brief information is cancelled ([0048-0082], [0087-0104], [0111-0125], [0143-0158] and [0166-0190]); note the remarks on claims 8-9.

As to claim 16, the claimed "An electronic device, comprising…" is composed of the same structural elements that were discussed with respect to claims 6-7. As to claim 18, the claimed "A non-transitory computer-readable storage medium…" is composed of the same structural elements that were discussed with respect to claims 6-7.

7. Claims 1-5, 11-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over YANG et al. (2022/0239988) in view of STEELBERG et al. (2020/0075019), and further in view of ARCHIBONG et al. (2014/0068692).

As to claims 1-2: YANG discloses a display method and apparatus for item information, and a device, and further discloses an information display method comprising: acquiring audio data (wireless communication) of a recommended video of a target object; and, in response to a speech recognition text of the voice broadcast in the recommended video containing a preset keyword, acquiring a target audio clip in the recommended video, wherein the target audio clip corresponds to a text sentence or a text sentence segment in the live stream viewer room (items or objects: electronic skipping rope, brown bear, baseball cap, etc.; see fig. 5, [0069-0075] and [0089-0104]) where the preset keyword is located (Server or Client device) (figs. 1-12, Abstract, [0003-0015], [0032-0044], [0048-0082], [0068-0076], [0087-0104], [0111-0125] and [0131-0158]). The Server or Client device receives live stream media that includes target voice/audio data and target videos (frame(s) or index) including preset keywords and other data or information associated with target objects or items (a text sentence segment) within the live stream, and other recommended videos of those objects or items. In response to a user input as to item(s) or object(s) of interest, it generates an associated audio clip corresponding to that input; further input prompts the user to "say something," which updates the audio clip accordingly.

YANG further discloses acquiring (Server or Client device) audio clip information of a target audio clip in the recommended video according to the audio data, wherein the audio clip information comprises identification information and association information, the association information comprising summary information or key information (user ID, item or object tag, name, keyword(s) and other information), and the target audio clip includes a preset keyword ([0032-0044], [0048-0082], [0087-0104], [0111-0125] and [0143-0158]). It performs recognition rules or operations, feature classification and other indexing during viewer room interactions, with trained recognition model(s) and interaction systems which include speech, natural language, DL and other AI systems. In response to receiving an information acquisition request for the recommended video, the audio clip information is sent to a client terminal, which displays the association information in response to a target video clip corresponding to the target audio clip being played on the client terminal, wherein the information acquisition request is sent by the client terminal; and the audio clip information and the corresponding target audio clip are acquired based solely on processing of the audio data of the recommended video, without relying on video images of the recommended video ([0032-0044], [0048-0082], [0087-0104], [0111-0125] and [0143-0158]). The Server or Client device performs recognition rules or operations, feature classification and other indexing during viewer room interactions, with trained recognition model(s) and classifiers and interaction systems which include speech, natural language, DL and other AI systems; with the acquired target voice/audio data and target videos (frame(s) or index), matching and filtering generate keywords and other data or information associated with target objects or items within the live stream and other recommended videos of those objects or items. The user further interacts via voice or audio ("say something") with the displayed or recommended audio text and reaches additional information or results of the query or search via the client/server.

YANG processes speech and other audio data, performs speech recognition on the audio data to obtain speech recognition text, acquires features, determines corresponding time information of each word in the audio recognition text in the audio data based on other features, and determines the audio clip information of the target audio clip in the recommended video according to the speech recognition text and the time information ([0048-0082], [0087-0104], [0111-0125] and [0143-0158]); BUT YANG appears silent as to where the features include a Mel-scale Frequency Cepstral Coefficients feature vector of the audio data; determining corresponding time information of each word in the audio recognition text in the audio data based on the Mel-scale Frequency Cepstral Coefficients feature vector; and determining the audio clip information of the target audio clip in the recommended video according to the speech recognition text and the time information. However, in the same field of endeavor, STEELBERG discloses a system and method for neural-network orchestration and further discloses using a Mel-scale Frequency Cepstral Coefficients feature vector of the audio data; determining corresponding time information of each word in the audio recognition text in the audio data based on the Mel-scale Frequency Cepstral Coefficients feature vector; and determining the audio clip information of the target audio clip in the recommended video according to the speech recognition text and the time information (figs. 1-13, Abstract, [0031-0034], [0044-0054], [0060-0072] and [0094-0099]), including segmentation of audio or speech data. Hence, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate the teaching of STEELBERG into the system of YANG to further process audio or speech data using other processing systems for additional features of the audio or speech data.

YANG as modified by STEELBERG further discloses wherein the audio clip information comprises identification information and association information of a target audio clip in the recommended video, the target audio clip includes a preset keyword, the identification information is configured to identify the target audio clip, and the association information is related to content live (broadcast) in the target audio clip ([0069-0076], [0095-0097], [0132-0143] and [0184-0196] of YANG); BUT appears silent as to where the information associated with an audio clip is information related to content broadcast in the target audio clip. However, in the same field of endeavor, ARCHIBONG discloses capturing and sharing audio, video, etc., and further discloses where the information associated with an audio clip is information related to content broadcast in the target audio clip (figs. 1-36, Abstract, [0062-0063], [0079-0083], [0113-0122], [0153-0164] and [0194-0202]): capturing and sharing broadcast objects, audio, video, song, closed captioning, and further sharing notifications, overlays, etc.
Hence, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate the teaching of ARCHIBONG into the system of YANG as modified by STEELBERG to further process audio or speech data and enhance the display information with additional data associated with target broadcast items or objects.

As to claim 3: YANG further discloses segmenting the speech recognition text based on the time information to obtain at least one text sentence; identifying text sentences containing a preset keyword in the at least one text sentence as candidate text sentences; filtering the candidate text sentences based on a preset filtering rule to obtain a target text sentence; and using an audio clip corresponding to the target text sentence in the audio data as a target audio clip in the recommended video, and determining audio clip information of the target audio clip ([0032-0044], [0048-0082], [0087-0104], [0111-0125] and [0143-0158]); note the remarks on claims 1-2.

As to claims 4-5: YANG further discloses using time node information corresponding to the target audio clip in the recommended video as identification information of the target audio clip, and using the target text sentence as association information of the target audio clip; and, in response to detecting that the content of the recommended video changes, re-determining the audio clip information of the target audio clip in the recommended video according to the changed audio data of the recommended video ([0032-0044], [0048-0082], [0087-0104], [0111-0125] and [0143-0158]). The live stream includes a product link and other resource information; note the remarks on claims 1-2.

As to claims 11-12, the claimed "An electronic device, comprising…" is composed of the same structural elements that were discussed with respect to claims 1-2. Claim 13 is met as previously discussed for claim 3. Claims 14-15 are met as previously discussed for claims 4-5. As to claim 17, the claimed "An electronic device, comprising…" is composed of the same structural elements that were discussed with respect to claims 1-2.

Conclusion

8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNAN Q SHANG, whose telephone number is (571) 272-7355. The examiner can normally be reached Monday-Friday, 7-4.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BRUCKART BENJAMIN, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANNAN Q SHANG/
Primary Examiner, Art Unit 2424
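
The rejections above repeatedly characterize the claims as an audio-only pipeline: speech recognition yields text with per-word time information (per STEELBERG, derived from MFCC feature vectors), the text is segmented into sentences, candidate sentences are filtered by preset keywords, and the matching clip's time nodes and sentence become the identification and association information. The sketch below illustrates that flow under stated assumptions; the Word record, the pause-based segmentation rule, and the keyword set are all hypothetical, not taken from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass
class Word:
    """One recognized word with its time information (assumed ASR output)."""
    text: str
    start: float  # seconds into the audio
    end: float

def target_clips(words: list[Word], preset_keywords: set[str]) -> list[dict]:
    # 1) Segment the speech recognition text into sentences using the
    #    per-word timing; the 0.8 s pause threshold is an assumed rule.
    sentences: list[list[Word]] = []
    current: list[Word] = []
    for i, w in enumerate(words):
        current.append(w)
        if i + 1 == len(words) or words[i + 1].start - w.end > 0.8:
            sentences.append(current)
            current = []
    # 2) Keep candidate sentences containing a preset keyword, and
    # 3) emit clip info: time-node identification plus the sentence
    #    as association information.
    clips = []
    for sent in sentences:
        if any(w.text.lower() in preset_keywords for w in sent):
            clips.append({
                "identification": (sent[0].start, sent[-1].end),
                "association": " ".join(w.text for w in sent),
            })
    return clips

# Tiny usage example (hypothetical transcript of a product broadcast):
words = [Word("this", 0.0, 0.2), Word("rope", 0.3, 0.6), Word("glows", 1.8, 2.1)]
print(target_clips(words, {"rope"}))
# -> [{'identification': (0.0, 0.6), 'association': 'this rope'}]
```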

Prosecution Timeline

Dec 21, 2023
Application Filed
May 01, 2024
Non-Final Rejection — §103, §112
Aug 07, 2024
Response Filed
Sep 06, 2024
Final Rejection — §103, §112
Nov 12, 2024
Response after Non-Final Action
Nov 29, 2024
Response after Non-Final Action
Nov 29, 2024
Examiner Interview (Telephonic)
Dec 09, 2024
Request for Continued Examination
Dec 15, 2024
Response after Non-Final Action
Dec 28, 2024
Non-Final Rejection — §103, §112
Apr 03, 2025
Response Filed
May 31, 2025
Final Rejection — §103, §112
Aug 04, 2025
Request for Continued Examination
Aug 06, 2025
Response after Non-Final Action
Aug 09, 2025
Non-Final Rejection — §103, §112
Nov 13, 2025
Response Filed
Mar 09, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587702
TERMINAL APPARATUS, DELIVERY SYSTEM, AND DELIVERY METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12587711
SYSTEM AND METHOD FOR CONFIGURING A CONTENT SELECTION INTERFACE
2y 5m to grant Granted Mar 24, 2026
Patent 12579450
Methods, Systems, And Apparatuses For Model Selection And Content Recommendations
2y 5m to grant Granted Mar 17, 2026
Patent 12556784
SYSTEM AND METHODS FOR OBTAINING AUTHORIZED SHORT VIDEO CLIPS FROM STREAMING MEDIA
2y 5m to grant Granted Feb 17, 2026
Patent 12549814
DYNAMIC SYNCING OF AGGREGATED MEDIA FROM STREAMING SERVICES
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 71%
With Interview: 82% (+10.7%)
Median Time to Grant: 3y 7m
PTA Risk: High

Based on 821 resolved cases by this examiner. Grant probability derived from career allow rate.
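
As a sanity check, the "with interview" figure appears to be the base grant probability plus the interview lift, rounded; this additive relationship is an inference from the quoted numbers, not a documented formula.

```python
base = 0.71   # career allow rate, used as the grant probability
lift = 0.107  # interview lift among resolved cases with interview
print(f"with interview: {base + lift:.0%}")  # prints "with interview: 82%"
```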
