Prosecution Insights
Last updated: April 19, 2026
Application No. 18/303,650

INTELLIGENT ROBOT

Non-Final OA §103
Filed: Apr 20, 2023
Examiner: JUNG, JAEWOOK
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Korea Institute Of Science And Technology
OA Round: 3 (Non-Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 33% (1 granted / 3 resolved; -18.7% vs TC avg). Grants only 33% of cases.
Interview Lift: +100.0% among resolved cases with interview.
Typical Timeline: 2y 8m avg prosecution; 27 applications currently pending.
Career History: 30 total applications across all art units.

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 23.2% (-16.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 3 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 23, 2026 has been entered.

Response to Amendment

This Office action is in response to the amendments filed January 23, 2026. Claims 1 and 4 are amended. Claims 8 and 9 are cancelled. Claims 1-7 and 10-15 are pending and addressed below.

Response to Arguments

Applicant's amendments to claim 1 introduce a new rejection under 35 USC 112(b), as an indefinite limitation has been introduced. See the relevant section below.

Applicant's arguments with respect to the rejection of claim 1 under 35 USC 103 have been fully considered but are not persuasive. Applicant provides multiple arguments against the prior art rejection of claim 1, where the fifth argument appears to summarize the previous arguments.

Regarding the first argument, applicant argues that Yim does not disclose that the inter-layer spaces are preset, optically functional gaps designed to transmit light, and specifically that Yim does not attribute any optical role or light-transmission function to such spacing.
However, examiner notes that Yim does disclose inter-layer spaces that are preset (Yim, section 2.1: "Small gaps produce higher resolution robots made of a larger number of layers, while larger gaps enable a wider motion range when actuated."), where the gaps are optically functional in that they are able to transmit light, being divided by non-zero spaces between the plurality of layers of members.

Regarding the second argument, applicant argues that Yim does not disclose controlling motor torque or revolutions for the purpose of regulating inter-member gap width, specifically asserting that Yim does not measure or control the elastic deformation by functional behavior. However, examiner directs applicant's attention to section "5. Advanced applications: audio-animatronics", where Yim discloses that the set of movements shown in Fig. 14 expresses emotions and talking. Further in that section, Yim discloses synchronization of the movements of the human head robot with an audio material, where it is shown that the control of the gaps is also a result of functional, actuator-based control by strings attached to certain points of the head (see Fig. 13 for exemplary attachment points).

Regarding the third argument, examiner clarifies that the previous Office action relied on the teachings of Christensen regarding the lighting features. Applicant argues that Yim in view of Christensen does not disclose using a motor-controlled gap width as the mechanism by which the transmitted light is modulated. However, examiner notes that the functional relationship between gap width and transmitted light amount is inherent, where a lack of a gap would prevent any light from being transmitted.

Regarding the fourth argument, applicant argues that the examiner relies on impermissible hindsight reconstruction of the applicant's disclosure. However, examiner directs applicant's attention to each disclosure pertaining to the field of animatronic robots.
Yim discloses a system of realistic facial expression configured to exhibit facial expressions and motions corresponding to speech; Strathearn discloses, in the similar field of animatronics, speech synthesis for realistic humanoid robots; Christensen discloses a system of robotic eye contact control; and Savage further discloses animatronics of realistic humanoid robots. Specifically, applicant argues that none of the references suggest redefining inter-member gaps as optical transmission elements whose width is intentionally controlled to modulate light output. However, examiner notes that Christensen discloses eye contact and eye motion such as blinking, partially closing eyelids, etc. (Christensen, [0039]); that Christensen discloses a light source positioned in the robotic figure (Christensen, [0014]); and that this feature would advantageously contribute to any reference within the field of animatronic face control, as Christensen states that the system utilizes light-based control to improve on the deficiencies of previous camera-based systems for animatronic eye contact (Christensen, [0006]). For at least these reasons, examiner maintains that Yim serves as the primary reference for the rejection of claim 1. Further amendments to the claims are addressed in the rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over "Animatronic soft robots by additive folding" (Yim) in view of "Lifelike Animatronic Abraham Lincoln!" (Savage) and US20210122055A1 (Christensen). For clarity, examiner notes that Yim's original disclosure was published on June 5, 2018 (see the website link attached to form 892 for "Animatronic soft robots by additive folding" of the last action), and that the original disclosure contains the same videos as the cited videos "Animatronic Soft Robots by Additive Folding part 1" (2019), "Animatronic Soft Robots by Additive Folding part 4" (2019) and "Animatronic Soft Robots by Additive Folding part 5" (2019).

Regarding claim 1, Yim in view of Savage discloses an intelligent robot comprising:

a plurality of members stacked with gaps to form a shape of a face; and

[Image: media_image1.png, greyscale]

Figure i.
3-D model of a head (0:08), cut up into connected slices (0:10), and folded back into the 3-D model (0:43).

See Multimedia Extension 1; relevant timestamps and images are also included. This video demonstrates the concept behind the end-to-end process. The process takes a 3-D model (0:08) and then turns it into a foldable, single-piece pattern that allows for recreation of the original 3-D model (0:10). The additive folding process (0:43) shows that the structure does indeed have a stack of layers with gaps.

at least one flexible structure disposed in at least one of the gaps, and

In reference to the previous statement on Appendix A: Extension 1, there exist two flexible structures for the human head example at 0:38 in the video: the attachment points of the pattern originally seen at 0:10, and the strings threaded inside of tiny holes of the layers seen at 0:29.

wherein when the instructions are executed by the at least one processor, the instructions cause at least one processor to control a motion of the face by applying a force to a power transmission element connected to at least one of the plurality of members

See Fig. 13. The figure shows that the attached strings act as the power transmission elements connected to at least one of the plurality of members.

control the motion of the face based on a waveform segment of the audio input, and

Yim, Section 5, Lines 14-16: "We synchronize various movements of the human head robot with an audio material (1-minute length monologue from a movie, "Life of Pi") as if the robot were talking (see the multimedia Extension 5 in Appendix A)." While Yim discloses that the audio material is a "1-minute length monologue", Multimedia Extension 5 in Appendix A suggests that the motion after each sound segment resets the head back to its original default configuration, seen at 0:04 of the video.
Any motion before or after the audio actually begins is also motion "based on a waveform segment of the audio input", as the movement from a resting face to speaking motion, and from speaking motion back to a resting face, is based on the audio input.

make the motion before each sound segment or the motion after each sound segment

While Yim discloses, in Multimedia Extension 5, control of the motion of the face based on a waveform segment of the audio input, Yim does not explicitly disclose that the motion is made before or after each sound segment, as the video does not include audio to make clear which movements come before and after each sound segment. From a similar field of endeavor, Savage discloses a video that displays an animatronic system by Garner Holt Productions showing facial expressions both before and after speech. See 0:49-1:18 of the video by Savage. One of ordinary skill in the art would find it obvious to combine the system of Savage with the system of Yim, as facial motions without audio are already suggested by Yim in the video, and including non-verbal movement improves the realism of the animatronic system of Yim.

control the motion of the face by controlling a motor to apply a tension force to the power transmission element having an end connected to at least one of the plurality of members and an opposite end connected to the motor, and

See Appendix A: Extension 5. While it is possible for the control shown in this video to be performed by a human, Yim discloses an intent to "include integration of various printable sensors and electric motor-based actuation system for feedback-control" (Yim, conclusion, lines 7-8). Therefore, one of ordinary skill in the art would find it obvious to adapt the system disclosed by Yim with the improvements Yim highlights in the same disclosure, to control the system in a smoother manner than that of a person.
control the motion of the face by controlling at least one of torque or a number of revolutions of the motor to widen or narrow the gaps between the plurality of members,

See Multimedia Extension 5. In light of the rationale of claim 8, the video clearly shows that as the face replicates the lines of dialogue, the lips of the head in particular can be seen to move up and down, widening and narrowing the gaps. One of ordinary skill in the art would find it obvious to include a processor to control the torque or revolutions of a motor connected to a power transmission element attached to one of the plurality of members that make up the face. The inclusion of the processor would permit combining the widening and narrowing of certain aspects of the face with motors (in the case of Fig. 13, five motors) to create a larger range of facial expressions and gestures.

wherein a light source is disposed inside of the intelligent robot,

While Yim does not disclose a light source disposed inside of the intelligent robot, from the similar field of endeavor of animatronic robots, Christensen discloses an animatronic system comprising a light source and light sensor for eye contact (Christensen, [0014]). Given that both systems pertain to human-like features and motions through animatronics, one of ordinary skill in the art would have found it obvious, prior to the applicant's effective filing date, to combine the light source and sensor system of Christensen with the face robot of Yim in view of Savage, as the inventions are compatible: the gaps formed by the plurality of folded members from Yim provide the space required to contain the light source.

wherein the plurality of members are spaced apart from each other by a preset distance to define the gaps, the gaps allowing light emitted from the light source to pass through, and

See Fig. 2c of Yim, where the caption identifies h as the resolution of the sliced model in the vertical direction, and section 2.1, which identifies that the resolution of slicing corresponds to the gap distance between layers, which a user can set.

wherein an amount of transmitted light from the light source increases when the gaps between the plurality of members are wider and the amount of light transmitted from the light source decreases when the gaps between the plurality of members are narrower.

One of ordinary skill in the art would find it obvious that the amount of transmitted light from the light source increases when the gaps between the plurality of members are wider and decreases when the gaps are narrower, as a change in the size of the gap permits more light to diffuse through the area of the gap.

Furthermore, Yim does not explicitly disclose: at least one processor; and a memory configured to store instructions for executing the at least one processor, wherein the intelligent robot includes: wherein when the instructions are executed by the at least one processor, the instructions cause at least one processor to control a motion of the face by applying a force to a power transmission element connected to at least one of the plurality of members based on an audio input.

See Multimedia Extensions 4 and 5. The motions displayed in these videos show that the human head robot gestures as an animatronic head would (in Extension 4) and imitates speaking dialogue "according to an audio clip" per the caption of Extension 5. While there is no explicit mention of a processor/memory that drives the robot, in the same disclosure Yim discloses the use of motorized control in Fig. 2.
Therefore, it would have been obvious to one of ordinary skill in the art, prior to applicant's effective filing date, that the device of Yim uses an undiscussed processor to move the face in accordance with instructions to mimic human features.

Regarding claim 2, with all of the limitations of claim 1, the intelligent robot further comprises: wherein the plurality of members have a plate-shaped structure, at least some of the members are connected by the flexible structure, and the gaps are present between the plurality of members.

See the section regarding the first limitation of claim 1. Specifically, the second image of Figure (i) shows the plate-shaped structure of the plurality of members, while the third image shows their folding, with the members connected by a shared connective section between plates and a clear gap between plates.

Regarding claim 3, with all of the limitations of claim 1, the intelligent robot further comprises: wherein the face is formed in a hollow 3-dimensional shape to form an accommodation space.

See the section regarding the first limitation of claim 1. The gaps formed by the stacking shown in the third image of Figure (i) show that a 3-D human head is formed, where the gaps act as a space that can accommodate objects that fit within.

Regarding claim 4, with all of the limitations of claim 3, the intelligent robot further comprises: wherein when the instructions are executed by the at least one processor, the instructions cause the at least one processor to turn on or off light in an upward or downward direction by the light source disposed at an upper portion or a lower portion inside of the intelligent robot.

In light of the rationale regarding claim 1, one of ordinary skill would find it obvious, prior to the effective filing date, to include processor and memory components in the system of Yim to automate the expression and speech motion of the system.
However, the disclosure does not provide a processor that turns light on or off, nor a light source inside of the robot. In a similar field of endeavor, Christensen discloses a system for sensing and controlling eye contact for a robot. From the abstract of the disclosure: "The system includes a light source positioned in the robotic figure to output through the light outlet of the eye." It is further disclosed that "the present description relates, in general, to design and control of robots, robotic figures, or characters, and/or animatronics that include eyes such as, but not limited to, human-like robotic characters." (Christensen, [0001]). Given that both disclosures deal with a system that entails human-like features such as a face, it would be obvious to one of ordinary skill in the art to combine the light source and sensor system of the eye from Christensen with the face robot of Yim in view of Savage, as the inventions are compatible. The accommodation space introduced in the rationale regarding claim 3 could be adapted to hold the light system, where the space would be any of the gaps formed during formation.

Regarding the capability of an on or off light, paragraph 10 of Christensen discloses: "Alternatively a structured light source, rather than a single beam source which may allow a face to be "missed" in some cases, may be easily fit within a robotic eye. Such light sources can be operated in a continuous manner or in an on/off sequence known to an image processor to produce an easy to detect beam or structured light emission that will point to exactly where the robotic character is presently looking as the robotic eye aims or targets the output of the light source held within it." Finally, when combined with Yim in view of Savage, any directional motion of the light would be addressed by the feature from Yim's disclosure that allows the robot to tilt its head in certain directions (see at least Fig. 13 of Yim).
Regarding claim 5, with all of the limitations of claim 1, the intelligent robot further comprises: wherein each of the plurality of members have at least one groove on an outer periphery thereof.

See Fig. 13. In particular, strings d and e (designated by white circles) are shown attached to the outer edge of the head. Each of the members that form the hollow 3-D head shape is fitted with holes (see Fig. (i), second image above) that, when stacked together, form grooves allowing the power transmission element to attach to specific parts of the head for certain motions, where strings a-c are the said transmission elements. In particular, the figure shows five points of attachment. Therefore, it would be obvious to one of ordinary skill in the art, prior to the effective filing date, that the strings used to actuate the system of Yim run through groove slices that line up according to the three-dimensional figure sliced into two-dimensional shapes.

Regarding claim 6, with all of the limitations of claim 5, the intelligent robot further comprises: wherein the power transmission element is inserted into each of the at least one groove, the power transmission element having an end connected to at least one of the plurality of members and an opposite end connected to a motor installed inside or outside of the intelligent robot.

In light of the rationale of claim 1, Fig. 2 shows the additive folding model being animated by strings at specified points of attachment to the model. In particular, (e) shows the pulling of said strings to control the soft robot structure, while (f) discloses an example of connecting the string to the motor.

Regarding claim 7, with all of the limitations of claim 1, the intelligent robot further comprises: wherein the face of the intelligent robot is formed by stacking the plurality of members in a shape of a human's face including at least one of eyes, a nose or a mouth.

See Fig. 15.
The figure segments the image of the formed human head, marking regions showing the eyes, cheek, nose, mouth, and forehead.

Regarding claim 13, with all of the limitations of claim 4, the intelligent robot further comprises: wherein the light source is installed in the accommodation space, so that a light emitted from the light source leaks through the gaps between the plurality of members, causing the intelligent robot to glow.

In light of the rationale regarding claim 4, it would be obvious to one of ordinary skill in the art, prior to the effective filing date, that the combination of Yim in view of Savage and Christensen would have light leaking from the gaps, with the light coming from inside the robot head, where the light source is housed in an accommodation space. When combined with Yim in view of Savage, the resulting robot would emit a glow through the plurality of gaps in the robot.

Regarding claim 14, with all of the limitations of claim 4, the intelligent robot further comprises: wherein the light source is installed in the accommodation space and at least one of the plurality of members and flexible structure is produced by a semitransparent material, so that the light emitted from the light source is scattered from inside the intelligent robot to the outside, causing the intelligent robot to glow.

Yim, Section 3, Lines 1-2: "The main material used for the bunny robot is 100 μm thick polyester film (Dura-lar film, Grafix), which is lightweight, dimensionally stable, highly elastic and resistive to tearing." See the Dura-lar film website and section 9 of the attached safety data sheet from the website. Said section lists alternative appearances of the film, one of which is translucent or semitransparent.
One of ordinary skill in the art would have found it obvious, prior to the effective filing date, to make the invention translucent so that a scattered light source makes the members glow, since it has been held to be within the general skill of a worker in the art to select a known material on the basis of its suitability for the intended use as a matter of obvious design choice. In re Leshin, 125 USPQ 416.

Regarding claim 15, with all of the limitations of claim 4, the intelligent robot further comprises: wherein the light source is installed in the accommodation space and at least one of the plurality of members and the flexible structure is produced by an opaque material, so that the light emitted from the light source is scattered in at least one of the plurality of members and flexible structure forming the intelligent robot.

In light of claim 14, the Dura-Lar film cited by Yim also comes in opaque variants from which the robot could be constructed. This material would scatter the emitted light rather than allowing it to pass through. One of ordinary skill in the art would have found it obvious, prior to the effective filing date, to make the flexible structure supporting the invention opaque, since it has been held to be within the general skill of a worker in the art to select a known material on the basis of its suitability for the intended use as a matter of obvious design choice. In re Leshin, 125 USPQ 416.

Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over "Animatronic soft robots by additive folding" (Yim) in view of "Lifelike Animatronic Abraham Lincoln!" (Savage) and US20210122055A1 (Christensen), and further in view of "A Novel Speech to Mouth Articulation System for Realistic Humanoid Robots" (Strathearn).
Regarding claim 10, with all of the limitations of claim 1, the intelligent robot further comprises: wherein when the instructions are executed by the at least one processor, in case that a peak of the audio input is equal to or more than a preset size, the instructions cause the at least one processor to scale up the motion of the face corresponding to the peak.

In light of the rationale from claim 1, a processor-controlled system would be obvious for performing all of the activities shown in Yim's disclosure. However, Yim does not disclose scaling the face's motion corresponding to a peak of the audio input. While Yim may not explicitly disclose that the face's motion is directly proportional to volume, Strathearn, from a similar field of endeavor, discloses that "the higher the amplitude, the wider the mouth opens and the lower the sound input, the less, this is representative of mouth aperture size when talking loud and quiet in humans." Therefore, it would be obvious to one of ordinary skill in the art, prior to the effective filing date, to add the above consideration from Strathearn to the system of Yim in view of Savage and Christensen to better accomplish the goal of human-like motion.

Regarding claim 11, with all of the limitations of claim 1, the intelligent robot further comprises: wherein when the instructions are executed by the at least one processor, the instructions cause the at least one processor to control the face by a first motion for a first segment before a first syllable identified by splitting the audio input into syllables, and control the face by a second motion for a second segment after a last syllable of the audio input.

In light of the rationale of claim 1, Yim discloses the use of audio samples to control the motion of a mouth of a human robot, but does not disclose details on how the motion is planned.
While Yim does not disclose a way of planning the animatronic face control by use of syllables, Strathearn discloses on page 3: "Thus, although this methodology is vital in the development of a robotic mouth system as it accounts for jaw positioning to syllable pattering and pitch frequency it requires further modification to include lip articulation for generating vowel/consonant sounds." On page 9 of the disclosure, Strathearn also lists jaw position to syllable patterning as one of five tests used to evaluate the robotic mouth. Therefore, it would be obvious to one of ordinary skill in the art, prior to the effective filing date, to consider jaw position as a function of syllables when planning to automate the motion of the robot described in Yim in view of Savage and Christensen. It would have been further obvious to one of ordinary skill in the art for a second motion to be dependent on a previous syllable, as speaking requires a set of mouth poses in sequence to form words and sentences.

Regarding claim 12, with all of the limitations of claim 11, the intelligent robot further comprises: wherein the first motion is a first oral motion set based on a human's mouth shape formed to output the first syllable, and wherein the second motion is a second oral motion set based on the human's mouth shape formed to silence after the last syllable.

In light of the rationale of claim 11, Strathearn's disclosure shows the use of jaw articulation to syllable patterning in their humanoid mouth robot.
Further in the disclosure, page 9, paragraph 3: "Observational data is collected using an online survey to examine the visual and speech authenticity of the 11 humanoid robots using Likert scales with embedded video samples and deployed to a random sample of 50 anonymous participants ages 18+." Therefore, it would be obvious to one of ordinary skill in the art, prior to the effective filing date, to program the oral motions of a robot to be based closely on a human's mouth shape, as the work looks to expand on the disclosed animatronic human head.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAEWOOK JUNG, whose telephone number is (571) 272-5470. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.J./
Examiner, Art Unit 3656

/WADE MILES/
Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Apr 20, 2023
Application Filed
May 27, 2025
Non-Final Rejection — §103
Sep 29, 2025
Response Filed
Oct 22, 2025
Final Rejection — §103
Jan 23, 2026
Request for Continued Examination
Feb 08, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12514149: SYSTEMS AND METHODS FOR SPRAYING SEEDS DISPENSED FROM A HIGH-SPEED PLANTER (granted Jan 06, 2026; 2y 5m to grant)
Patent 12480561: VEHICLE AND CONTROL METHOD THEREOF (granted Nov 25, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 33%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
