Prosecution Insights
Last updated: April 19, 2026
Application No. 18/096,919

EYEWEAR PROCESSING SIGN LANGUAGE TO ISSUE COMMANDS

Non-Final OA (§103, §112)
Filed: Jan 13, 2023
Examiner: XIAO, DI
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Snap Inc.
OA Round: 3 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (463 granted / 600 resolved; +22.2% vs TC avg)
Strong interview lift: +21.7% allow rate for resolved cases with an interview (chart compares outcomes without vs. with an interview)
Typical timeline: 3y 4m average prosecution; 24 applications currently pending
Career history: 624 total applications across all art units
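For readers who want to sanity-check the headline numbers, here is a minimal sketch of how a career allow rate and the implied Tech Center baseline can be derived from the counts shown above; the calculation and the inferred baseline are illustrative assumptions, not figures pulled from USPTO data.

```python
def career_allow_rate(granted: int, resolved: int) -> float:
    """Allow rate = granted applications / all resolved applications."""
    return granted / resolved

# Figures shown above: 463 granted out of 600 resolved cases.
rate = career_allow_rate(granted=463, resolved=600)
print(f"Career allow rate: {rate:.1%}")  # 77.2%, shown as 77% in the card

# The "+22.2% vs TC avg" delta implies a Tech Center baseline near 55%;
# that baseline is inferred here rather than reported directly.
implied_tc_average = rate - 0.222
print(f"Implied TC average allow rate: {implied_tc_average:.1%}")  # ~55.0%
```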

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 600 resolved cases
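The per-statute chart can be read the same way; the sketch below simply restates the figures above and back-computes the Tech Center baseline from each stated delta, which is an assumption about how the deltas are defined.

```python
# Figures from the chart above: (examiner value, stated delta vs TC average).
# The TC baseline is back-computed from those two numbers, which assumes the
# delta is a simple difference of the plotted values.
statute_performance = {
    "§101": (0.082, -0.318),
    "§103": (0.576, +0.176),
    "§102": (0.171, -0.229),
    "§112": (0.142, -0.258),
}

for statute, (value, delta) in statute_performance.items():
    implied_tc_avg = value - delta
    print(f"{statute}: examiner {value:.1%}, implied TC average {implied_tc_avg:.1%}")
```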

Office Action

§103 §112
DETAILED ACTION

In Applicant's Response (RCE) dated 11/22/2025, Applicant amended claims 1 to 20 and argued against all rejections previously set forth in the Office action dated 8/22/2025. Claims 1-20 are pending in this case.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 6/23/2014 has been entered.

Response to Arguments

Applicant's arguments were considered, but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

With regard to claim 1, applicant claims the limitation of "a camera supported by the frame and configured to capture an image in front of the frame including a hand gesture; generate a command that is indicative of the identified hand gesture, wherein the command is configured to initiate the camera to take an image of an object in front of the frame." These limitations are not specifically taught by the specification or the original claims. More specifically, the specification or the original claims do not specifically teach the limitation of "capture an image in front of the frame". Claims 10 and 18 are rejected for the same reason.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 3, 4, 7, 9, 10, 11, 12, 13, 17, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Browy, Pub. No.: US 20180075659 A1, in view of Kothari, Pub. No.: US 20170313248 A1, and further in view of O'Neil, Pub. No.: US 20140244652 A1.

With regard to claim 1: Browy discloses the aspect of eyewear, comprising: a frame configured to be worn on a head of a user; a camera supported by the frame and configured to capture an image including a hand gesture (see fig. 2A for the eyewear; paragraph 36: "The wearable system described herein can combine sign language recognition (SLR) and display capability of a wearable device to provide a user with information based on a detected sign language. For example, an outward-facing camera on the wearable device can image gestures being made, identify signs among the gestures, translate the signs to a language the user understands, and display the translation to the user. A transcript (e.g., a caption or a text bubble) of the detected sign language can be displayed to the user by the wearable system. A machine learning algorithm (e.g., a deep neural network) can receive the images and perform the identification and translation of the signs. When prompted by the user, the meaning of a word in the transcript or relevant information from an appropriate source can be displayed. The kinds of auxiliary information that the wearable system can provide can be as unlimited as the vast array of available information resources, e.g., on the Internet."); and a processor configured to: receive the image including the hand gesture from the camera (a wearable system can receive an image of the user's environment; paragraph 38: "As further described herein, a wearable system can receive an image of the user's environment. The image may be acquired by the outward-facing imaging system of a wearable device or a totem associated with the wearable device. The wearable system can determine whether the image comprises one or more letters or characters and convert the one or more letters or characters into text. The wearable system may determine whether the image comprises letters or characters using a variety of techniques, such as, for example, machine learning algorithms or optical character recognition (OCR) algorithms. The wearable system may use object recognizers (e.g., described in FIG. 7) to identify the letters and characters and convert them into text."); identify the hand gesture as presenting sign language (the wearable system 200 can interpret sign language by, for example, detecting gestures that may constitute sign language; paragraph 139: "The wearable system 200 can implement a sensory eyewear system 970 for facilitating user's interactions with the other people or with the environment. As one example of interacting with other people, the wearable system 200 can interpret sign language by, for example, detecting gestures that may constitute sign language, translating the sign language to another language (e.g., another sign language or a spoken language), and presenting the translated information to a user of a wearable device. As another example, the sensory eyewear system 970 can translate speech into sign language and present the sign language to the user"); and generate a command that is indicative of the identified hand gesture (the wearable system 900 may be configured to track and interpret hand gestures for button presses, for gesturing left or right, stop, grab, hold, etc.; paragraph 115: "Hand gesture tracking or recognition may also provide input information. The wearable system 900 may be configured to track and interpret hand gestures for button presses, for gesturing left or right, stop, grab, hold, etc. For example, in one configuration, the user may want to flip through emails or a calendar in a non-gaming environment, or do a "fist bump" with another person or player. The wearable system 900 may be configured to leverage a minimum amount of hand gesture, which may or may not be dynamic. For example, the gestures may be simple static gestures like open hand for stop, thumbs up for ok, thumbs down for not ok; or a hand flip right, or left, or up/down for directional commands. Hand gesture tracking can include tracking gestures made by others in the user's environment, such as others who make the gestures to communicate with sign language (see, e.g., FIG. 13A).").

Browy does not disclose the aspect wherein generate a command that is indicative of the identified hand gesture, wherein the command is configured to initiate the camera to take an image. However, Kothari discloses the aspect wherein generate a command that is indicative of the identified hand gesture, wherein the command is configured to initiate the camera to take an image (paragraph 125: "FIG. 28 illustrates that the sun visor system can comprise of a non-touch selfie function, which would allow the users to take selfie photos/videos without even touching the photo/video button on the sun visor device. Using either a single or combination of hand gestures, a user would be able to instruct the camera associated with the sun visor device to take the selfie photo/video after a few seconds (for example: 5 seconds) once instructed. In addition, a hand gesture can also comprise be raising of the hand or hands with a specific number of fingers raised, wherein the number of fingers raised would indicate the number of seconds that the camera associated with sun visor device should wait before taking the selfie photo/video. For example: as shown in the figure, when both hands with a total of six fingers are raised, then the camera associated with sun visor device would wait six seconds before taking the selfie photo/video."). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Kothari to Browy so the user would be able to use the eyewear's camera to capture his or her hand gesture in order to perform the function of capturing an image, saving time and effort.

Browy and Kothari do not disclose the aspect wherein the image is captured in front of the frame. However, O'Neil discloses the aspect wherein the image is captured in front of the frame (paragraph 62: "In the embodiment depicted in FIG. 8a, the glasses (800) are equipped with two defining features. In this embodiment, one of the lenses in the frame would simultaneously serve as a display screen (810) built into the lens as to display certain pieces of information for the user to consume. In this embodiment, the display screen (810) has the ability to adjust the transparency of the display between the live view and the digitally-enhanced view. This transparency functionality may not be the same in other embodiments. Display screen (810) would show content as directed by the associated computing functions. In the depicted embodiment, a camera (820) is built onto the glasses (800) as a way to capture images and/or videos that are in front of the frames. A user may choose to take photos of events or even use the camera for other computing functions. One such function would be the use of facial recognition technology."). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply O'Neil to Browy and Kothari so the same method can be applied to a traditional glasses design with the frame in front of the user's face, wherein the system can capture an image in front of the frame and the user's face and use the image to determine the user's gesture.

With regard to claims 2 and 11: Browy, Kothari, and O'Neil disclose the aspect of the eyewear of claim 1, wherein the processor is configured to use a convolutional neural network (CNN) to identify the hand gesture (Browy paragraph 152: "The wearable system 200 can convert the captured sign language to text which can be presented to a user or translated into another language. Conversion of sign language to text can be performed using algorithms such as deep learning (which may utilize a deep neural network), hidden Markov model, dynamic programming matching, etc. For example, the deep learning method (a convolutional neural network in some cases) can be trained on images or videos containing known signs (supervised learning) so as to determine features representative of the signs and to build a classification model based on the learned features. Such a trained deep learning method can then be applied by the local processing and data module 260 or the remote processing module and data repository 270, 280 of the wearable system 200 to images of a signer detected by the outward-facing imaging subsystem.").

With regard to claims 3 and 12: Browy, Kothari, and O'Neil disclose the aspect of the eyewear of claim 1, wherein the processor is configured to identify the hand gesture by matching the hand gesture in the image to a set of hand gestures (Browy paragraph 36, quoted above with regard to claim 1).

With regard to claims 4, 13, and 19: Browy, Kothari, and O'Neil disclose the aspect of the eyewear of claim 1, wherein the command is configured to initiate a predefined function (Browy paragraph 124: "Based at least partly on the detected gesture, eye pose, head pose, or input through the totem, the wearable system detects a position, orientation, or movement of the totem (or the user's eyes or head or gestures) with respect to a reference frame, at block 1020. The reference frame may be a set of map points based on which the wearable system translates the movement of the totem (or the user) to an action or command. At block 1030, the user's interaction with the totem is mapped. Based on the mapping of the user interaction with respect to the reference frame 1020, the system determines the user input at block 1040.").

With regard to claim 7: Browy, Kothari, and O'Neil disclose the aspect of the eyewear of claim 1, wherein the processor is configured to identify a word from a series of hand gestures (Browy paragraph 177: "FIG. 13A shows an example user experience of a sensory eyewear system where the sensory eyewear system can interpret a sign language (e.g., gestured by a signer) for a user of a wearable system. This example shows a signer 1301 who the user of a sensory eyewear system is observing. The user can perceive that the signer 1301 is making a sequence 1300 of hand gestures as shown in the scenes 1305, 1310, and 1315. The hand gesture in the scene 1305 represents the word "how"; the hand gesture in the scene represents the word "are"; and the hand gesture in the scene 1315 represents the word "you". Thus the sequence 1300 can be interpreted as "How are you". The sequences 1320 and 1340 show the same gestures as the sequence 1300. The gesture 1305 corresponds to the gestures 1325 and 1345; the gesture 1310 corresponds to the gestures 1330 and 1350; and the gesture 1315 corresponds to the gestures 1335 and 1355. However, the sequences 1300, 1320, and 1340 illustrate different user display experience as further described below.").

With regard to claims 9 and 17: Browy, Kothari, and O'Neil disclose the aspect of the eyewear of claim 1, wherein the hand gesture comprises a moving hand gesture (Browy: a gesture may include a dynamic gesture such as a fist bump, or a hand flip right, left, or up/down for directional commands; paragraph 115, quoted above with regard to claim 1; see also paragraph 172 for the swipe gesture).

Claim 10 is rejected for the same reason as claim 1. Claim 18 is rejected for the same reason as claim 1.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Browy, in view of Kothari and O'Neil, and further in view of Osborn, Pub. No.: 2023/0072423 A1.

With regard to claims 5 and 14: Browy, Kothari, and O'Neil do not disclose the aspect wherein the predefined function is setting a timer. However, Osborn discloses the aspect wherein the predefined function is setting a timer using a gesture (paragraph 1653: "As the user rolls their wrist clockwise, for example, the duration of the timer may increase, and the smart or virtual assistant may provide auditory feedback to the user (i.e. '1 minute', '2 minutes', '3 minutes', etc.). If the user accidentally selects a timer duration greater than intended, the user may roll their wrist counterclockwise while receiving further auditory feedback from the smart or virtual assistant, so that the user may select an intended timer duration. Once the correct duration has been selected, another "pinch tap" gesture may set the timer and the smart or virtual assistant will notify the user after the appropriate amount of time. At any point in this process, a "flick" gesture may enable a user to exit the timer setting module."). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Osborn to Browy, Kothari, and O'Neil so the user would be able to use the eyewear's camera to capture his or her gesture in order to perform the function of setting a timer, saving time and effort.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Browy, in view of Kothari and O'Neil, and further in view of Shiplacoff, Pub. No.: US 20130346921 A1.

With regard to claims 6 and 15: Browy, Kothari, and O'Neil do not disclose the eyewear of claim 4, wherein the processor is configured to launch a third-party application responsive to the identified hand gesture. However, Shiplacoff discloses the aspect wherein the processor is configured to launch a third-party application responsive to the identified hand gesture (paragraph 143: "The foregoing examples have been described in the context of transitioning a computing device from a limited access state to a different access state. However, the concept and implementation of a light field of objects generated at a touch-sensitive display of a computing device and with which a user can interact via the display can be employed in any of a number of different contexts of using a computing device. For example, after a computing device has been transitioned to an unlocked state, a light field can be employed in a variety of geometric configurations, e.g. concentric circles or a rectangular grid, to invoke one or more functions of the operating system of or a particular application executed by the computing device. For example, the computing device can cause the touch-sensitive display to generate a partially or completely transparent grid of dots on a portion of the display when in an unlocked state. In one such an example, when a user activates, e.g., a particular location on the display and then swipes across a portion or all of the grid of dots, the computing device can cause the display to increase the opacity of dots in the grid based on the proximity of the dots to the user gesture, e.g., in the manner described above with reference to FIGS. 2 and 8A-8C. Additionally, upon completion of the gesture, the computing device can invoke one or more functions, e.g. launch an operating system or third-party application on the computing device."). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Shiplacoff to Browy, Kothari, and O'Neil so the user would be able to use the gesture to quickly access third-party applications, saving time and effort.

Claims 8, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Browy, in view of Kothari and O'Neil, and further in view of Itzhaik, Pub. No.: US 20150084859 A1.

With regard to claim 8: Browy, Kothari, and O'Neil do not disclose the eyewear of claim 7, wherein the processor is further configured to auto-complete spelling of the word. However, Itzhaik discloses the aspect wherein the processor is further configured to auto-complete spelling of the word after identifying a word using a hand gesture (paragraph 59: "FIG. 4 is a flowchart schematically illustrating a process and method for cross-matching of multiple inputs each input is a gesture of different body parts mainly hand movement and lips movement for lingual signs identification and verification, according to some embodiments of the present invention. In this process the processor receives data indicative of simultaneous inputs from the multiple sensing and/or input devices 31 such as hands and lips movements gestures, where the hand movement gestures are in sign language and performed by the user simultaneously while speaking the words using lips movements in front of at least one camera positioned to capture both the hands movements as well as the lips movements of the user. The received data is then processed and analyzed using gesture recognition algorithms one adapted to decode hand movements gestures and the other to decode lips movements gestures 33-34. The decoding of the hands movements gesture results in identifying a first lingual sign (i.e. a phoneme, a syllable, a symbol, a word etc.) associated with the first input and the decoding of the simultaneous lips movements gesture results in identifying a second lingual sign associated with the second input. The first lingual sign may then be compared to the second lingual sign 35 for verification thereof, following the method described in relation to FIG. 3 for cross-matching the identified signs and optionally for auto-completion of text."). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Itzhaik to Browy, Kothari, and O'Neil so the user would be able to have their gestured words autocompleted, saving time and effort without having to complete the words themselves.

With regard to claims 16 and 20: Browy, Kothari, and O'Neil disclose identifying a word from a series of hand gestures (Browy paragraph 177, quoted above with regard to claim 7). Browy, Kothari, and O'Neil do not disclose the aspect wherein the processor auto-completes spelling of the word. However, Itzhaik discloses the aspect of identifying a word from a series of hand gestures, wherein the processor auto-completes spelling of the word (paragraph 59, quoted above with regard to claim 8). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Itzhaik to Browy, Kothari, and O'Neil so the user would be able to have their gestured words autocompleted, saving time and effort without having to complete the words themselves.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DI XIAO, whose telephone number is (571) 270-1758. The examiner can normally be reached 9 AM-5 PM EST, M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DI XIAO/Primary Examiner, Art Unit 2178
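For orientation on the technology at issue, the claim 1 pipeline that the §103 rejection maps onto Browy, Kothari, and O'Neil (capture a frame, identify the hand gesture, generate a command such as triggering the camera) can be summarized as a short sketch. The class names, gesture labels, and mapping below are hypothetical illustrations of the claim language, not code from the application or the cited references.

```python
from enum import Enum, auto

class Command(Enum):
    TAKE_PHOTO = auto()
    SET_TIMER = auto()
    LAUNCH_APP = auto()
    NONE = auto()

# Hypothetical mapping from recognized gestures to device commands, mirroring
# the claim language ("command ... configured to initiate the camera to take
# an image") rather than any disclosed implementation.
GESTURE_TO_COMMAND = {
    "open_palm": Command.TAKE_PHOTO,
    "raised_fingers": Command.SET_TIMER,
    "swipe_right": Command.LAUNCH_APP,
}

def classify_gesture(frame: bytes) -> str:
    """Stand-in for the claimed classifier (e.g., a CNN matching the gesture
    against a known set); returns a gesture label."""
    # A real system would run the camera frame through a trained model here.
    return "open_palm"

def process_frame(frame: bytes) -> Command:
    """Capture -> identify hand gesture -> generate command, per claim 1."""
    gesture = classify_gesture(frame)
    return GESTURE_TO_COMMAND.get(gesture, Command.NONE)

if __name__ == "__main__":
    frame = b"\x00" * 16  # placeholder for a captured camera frame
    print(process_frame(frame))  # Command.TAKE_PHOTO
```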

Prosecution Timeline

Jan 13, 2023: Application Filed
Apr 02, 2025: Non-Final Rejection (§103, §112)
Jul 07, 2025: Response Filed
Aug 21, 2025: Final Rejection (§103, §112)
Oct 22, 2025: Response after Non-Final Action
Nov 22, 2025: Request for Continued Examination
Dec 05, 2025: Response after Non-Final Action
Jan 02, 2026: Non-Final Rejection (§103, §112)
Apr 01, 2026: Applicant Interview (Telephonic)
Apr 02, 2026: Examiner Interview Summary
Apr 05, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599341
AUTONOMOUS, CONSENT DRIVEN AND GENERATIVE DEVICE, SYSTEM AND METHOD THAT PROMOTES USER PRIVACY, SELF-KNOWLEDGE AND WELL-BEING
2y 5m to grant · Granted Apr 14, 2026
Patent 12597519
METHODS FOR CHARACTERIZING AND TREATING A CANCER TYPE USING CANCER IMAGES
2y 5m to grant · Granted Apr 07, 2026
Patent 12588967
PRESENTATION OF PATIENT INFORMATION FOR CARDIAC SHUNTING PROCEDURES
2y 5m to grant · Granted Mar 31, 2026
Patent 12586456
SYSTEMS AND METHODS FOR PROVIDING SECURITY SYSTEM INFORMATION USING AUGMENTED REALITY EFFECTS
2y 5m to grant · Granted Mar 24, 2026
Patent 12579773
DISPLAY APPARATUS AND DISPLAY METHOD
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview (+21.7%): 99%
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 600 resolved cases by this examiner. Grant probability derived from career allow rate.
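The with-interview figure appears to be the base grant probability plus the examiner's interview lift, capped at 100%; here is a quick check under that assumption.

```python
# Assumption: "with interview" = base grant probability + interview lift,
# capped at 100%. 463/600 ≈ 77.2%; 77.2% + 21.7% ≈ 98.9%, displayed as 99%.
base_probability = 463 / 600   # career allow rate from the examiner's resolved cases
interview_lift = 0.217         # stated lift for cases with an interview
with_interview = min(base_probability + interview_lift, 1.0)
print(f"Grant probability with interview: {with_interview:.0%}")  # 99%
```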
