Prosecution Insights
Last updated: April 19, 2026
Application No. 18/852,940

ENHANCING IMAGES FROM A MOBILE DEVICE TO GIVE A PROFESSIONAL CAMERA EFFECT

Non-Final OA: §103, §112
Filed
Sep 30, 2024
Examiner
TREHAN, AKSHAY
Art Unit
2639
Tech Center
2600 — Communications
Assignee
DeepMind Technologies Limited
OA Round
1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 71% (399 granted / 560 resolved; +9.3% vs TC avg), above average
Interview Lift: +23.5% for resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 14 applications currently pending
Career History: 574 total applications across all art units

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§103: 63.9% (+23.9% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Comparisons are against the Tech Center average estimate; based on career data from 560 resolved cases.

Office Action

§103 §112
NON-FINAL ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/06/24 complies with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. Examiner notes that the current title “ENHANCING IMAGES FROM A MOBILE DEVICE TO GIVE A PROFESSIONAL CAMERA EFFECT” is a good starting point but recommends adding analogous language to “USING A TRAINED IMAGE ENHANCEMENT NEURAL NETWORK” into the current title. For example, Examiner recommends the following amended new title: “ENHANCING IMAGES FROM A MOBILE DEVICE USING A TRAINED IMAGE ENHANCEMENT NEURAL NETWORK TO GIVE A PROFESSIONAL CAMERA EFFECT”

Claim Rejections – 35 USC § 112(b) – Definiteness Requirement

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. 
Claim 1 (lines 2-6) recites the underlined limitation (with emphasis in bold): “capturing an image with a camera of a mobile device; obtaining, from a user interface of the mobile device, user input data defining a set of one or more specified characteristics of a digital camera, wherein the set of one or more specified characteristics defines one or more characteristics of an exposure triangle of settings comprising an aperture setting, a shutter speed setting, and an ISO setting of the digital camera”. The method steps of “capturing” and “obtaining” are respectively performed by “a camera” and “a digital camera”. It is not clear if the said “a digital camera” is different from or the same as the previously defined “a camera of a mobile device”. Correction is required. Claim 1 (lines 9-12) recites the underlined limitation (with emphasis in bold): “processing the image captured with the camera of the mobile device using a trained image enhancement neural network whilst conditioned on the conditioning tensor to generate an enhanced image having the appearance of an image captured by the digital camera with the specified characteristics;”. The method step of “processing” is considered to define a result desired to be achieved because the said “trained image enhancement neural network” is interpreted to be an undefined mathematical method, wherein it is not clear how this processing step takes into account the captured image and the conditioning tensor. Claim 1 has not defined any details of the training method that is used to show the causal link between the captured image and the conditioning tensor. Correction is required. Independent claim 16 also recites the same deficiencies as those discussed above for claim 1, and thus, does not meet the definiteness requirement under 35 U.S.C. 112(b) for the same reasons. Claims 2-15 and 17 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 
112 (pre-AIA ), second paragraph for reasons discussed above and for respectively depending from rejected claims 1 or 16. Independent claim 18 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph for reasons to be discussed below. Claim 18 (lines 1-6) recites the underlined limitation (with emphasis in bold): “A computer-implemented method of training an image enhancement neural network to provide photographic images, wherein the image enhancement neural network has a plurality of image enhancement neural network parameters and is configured to receive an input image from a mobile device and to process the input image dependent on an image enhancement conditioning input to generate an enhanced image that gives the appearance of an image captured by a digital camera with characteristics defined by the image enhancement conditioning input”. Claim 18 appears to differ and/or is not consistent with claim 1. Claim 18 does not make clear that the image enhancement conditioning input is "user input data defining a set of one or more specified characteristics of a digital camera, wherein the set of one or more specified characteristics defines one or more characteristics of an exposure triangle of settings comprising an aperture setting, a shutter speed setting, and an ISO setting of the digital camera" as was specified in claim 1. Claim 18 (lines 20-21) recites the underlined limitation (with emphasis in bold): “training the image enhancement neural network using one of the first and second training examples to generate a first enhanced image”; It is not clear what the characteristics of an output enhanced image are, and how it differs from an input training example, and thus, how the training could be performed and what the purpose of the training is. 
Claim 18 (lines 21-22) recites the underlined limitation (with emphasis in bold): “whilst conditioned on camera-characterizing metadata for generating the first enhanced image”; It is not clear what is meant by "whilst conditioned on camera-characterizing metadata". Previously in the claim, it is defined that "camera-characterizing metadata" is associated to a digital camera image, and thus, not to a first training example which is a source camera image, whereas the present training can use a first training example. Therefore, it is not clear how the camera-characterizing metadata is obtained and how it's used in the training. Furthermore, it is not clear WHETHER the limitation "whilst conditioned on camera-characterizing metadata for generating the first enhanced image" MEANS: "whilst the image enhancement neural network is conditioned on camera-characterizing metadata desired / associated for the first enhanced image to be generated” -OR- "whilst the image enhancement neural network is conditioned on camera-characterizing metadata during the processing of generating the first enhanced image". Claim 18 (lines 23-25) recites the underlined limitation (with emphasis in bold): “training an image recovery neural network having a plurality of image recovery neural network parameters, using the other of the first and second training examples, to generate a first recovered image”; It is not clear what the characteristics of an output recovered image are, and how these differ from an input training example, and thus, how the training could be performed and what the purpose of the training is. 
Claim 18 (lines 26-30) recites the underlined limitation (with emphasis in bold): “and processing the further image sequentially using both the image enhancement neural network and the image recovery neural network to recreate a version of the further image, and updating the image enhancement neural network parameters and the image recovery neural network parameters to increase consistency between the further image and the recreated version of the further image”. There appear to be two alternative embodiments in claim 18, where: in case 1 - the image enhancement neural network is trained using a source camera image, while the image recovery neural network is trained using a digital camera image, AND in case 2 - the image enhancement neural network is trained using a digital camera image, while the image recovery neural network is trained using a source camera image. For each case, the purpose of training the image enhancement neural network and the purpose of training the image recovery neural network should be clarified (more specifically than "to generate a first enhanced image" and "to generate a first recovered image"), and the relationship between these training processes and the goal "to generate an enhanced image that gives the appearance of an image captured by a digital camera with characteristics defined by the image enhancement conditioning input" should be clearly defined in claim 18. Furthermore, each training step should clearly define what the input is, what the output is, and what kind of (loss, cost) function is linking the input and output. Consequently, since the inputs/outputs of the neural networks are different in the two alternative embodiments, it appears impossible to cover both embodiments with a single claim. Claims 19-30 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph for reasons discussed above and for respectively depending from rejected claim 18. 
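The two-network training the examiner is parsing here can be pictured with a toy sketch. Everything below is hypothetical: scalar gains stand in for the image enhancement and image recovery networks, and the update rule is plain gradient descent on a squared recreation error. It illustrates the cycle-consistency idea recited in claim 18, not the applicant's disclosed method.

```python
# Toy sketch of claim 18's consistency objective (hypothetical names):
# g_enh and g_rec are scalar stand-ins for the enhancement and recovery
# networks. Gradient descent on a squared "recreation" error drives the
# enhance-then-recover composition toward identity.

def consistency_step(x, g_enh, g_rec, lr=0.01):
    recreated = g_rec * (g_enh * x)   # enhance, then recover
    err = recreated - x               # consistency residual
    d_enh = err * g_rec * x           # d(0.5 * err**2) / d(g_enh)
    d_rec = err * g_enh * x           # d(0.5 * err**2) / d(g_rec)
    return g_enh - lr * d_enh, g_rec - lr * d_rec

g_enh, g_rec = 2.0, 0.3               # deliberately mismatched at the start
for _ in range(500):
    g_enh, g_rec = consistency_step(1.0, g_enh, g_rec)
# After training, g_enh * g_rec is close to 1, i.e. the recreated version
# of the further image is consistent with the further image itself.
```

In this reading, "increasing consistency" just means minimizing the gap between an image and its enhanced-then-recovered version, which is why the claim updates both parameter sets together.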
Examiner notes that no prior art rejection can be made to claims 18-30 until Applicant clarifies/addresses the remarks discussed above.

Closest Prior Art

The prior art (cited on PTO-892) is considered pertinent to applicant's disclosure. Among these, the following references are considered to be the closest, collectively disclosing the state of the art concerned with capturing images with a digital camera and enhancing the captured images using training data for machine learning enabled image enhancement having desired user input exposure settings.

BOUZARAA (US 20180241929) – applied to 35 USC 103 rejection, see Abstract, Fig. 1-4, and para [0012, 0020-22, 0028, 0033-37, 0040-41, 0050-51, 0065, 0096].

SHEN (US 20200051260) – applied to 35 USC 103 rejection, see Abstract, Fig. 1A, 1B & 6, and para [0087-89, 0164].

CHEN (US 20190188535) – see Abstract, Fig. 1-4, and para [0042-44, 0048].

NOTE: Examiner welcomes INTERVIEW(s) to discuss the instant application’s claimed invention as it corresponds to the specification embodiments, as well as, discussing the similarities/differences taught/not taught by prior art. In the interest of compact prosecution, Applicant’s arguments/amendments should not only address the cited closest art applied/relied on in the 35 USC 103 rejection (below), but also address the other cited closest art not applied/relied on.

Claim Rejections – 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over BOUZARAA (US 20180241929) in view of SHEN (US 20200051260) -- hereafter, termed as shown “underlined”. As per INDEPENDENT CLAIM 1, BOUZARAA teaches a computer-implemented method (See computer implemented method per para [0002-0003, 0012] in view of computer devices 100/200 in Fig. 1-2 and method in Fig. 3-4), comprising: capturing an image with a camera of a mobile device (Examiner considers that the computer devices 100/200 (Fig. 1-2) may be a standalone device used with a plurality of other cameras OR a component of a camera that may also be used with other cameras per the embodiments anticipated in para [0012, 0034-35, 0040-41, 0050-51, 0065, 0096]. -- Furthermore, the taught camera captures a first “input” image which is used by the computer device to generate a second “output” image which has an exposure different from the exposure of the input image, para [0012]. 
See para [0033-37]: the computer device can select a preferred intensity transformer from a plurality of intensity transformers based on exposure settings of the input image, para [0033], AND the preferred intensity transformer can be selected based on exposure settings or exposure parameters, such as exposure value (EV) and international organization of standardization (ISO) of a digital camera, para [0035]); obtaining, from a user interface of the mobile device, user input data defining a set of one or more specified characteristics of a digital camera, wherein the set of one or more specified characteristics defines one or more characteristics of an exposure triangle of settings comprising an aperture setting, a shutter speed setting, and an ISO setting of the digital camera (See above discussion AND para [0036]: The preferred intensity transformer can also be selected based on exposure settings or exposure parameters, such as EV and ISO of the output image (for example, the desired output exposure settings). Preferably, the output exposure settings can be defined by a user. Therefore, the output exposure settings defined by a user correspond to specified characteristics of a simulated digital camera, wherein the exposure settings may be an exposure ratio, a ratio between ISO settings, a ratio between shutter speeds, a ratio between aperture values, and so on, para [0037]); determining, from the user input data, a conditioning tensor that represents features of the one or more specified characteristics (See para [0035-36]: the set of output exposure settings, comprising ISO setting, shutter speed and aperture value, makes up the said conditioning tensor.); processing the image captured with the camera of the mobile device using a trained image enhancement neural network (See Fig. 1-4 in view of para [0020-22]: the intensity transformer (Fig. 1: 110) comprises a convolutional neural network ... 
Using the CNN, the intensity transformation, to be carried out by the intensity transformer, can be learned from training images) whilst conditioned on the conditioning tensor (See para [0036]: The preferred intensity transformer can also be selected based on…the desired output exposure settings) to generate an enhanced image having the appearance of an image captured by the digital camera with the specified characteristics (As discussed above, the output exposure settings defined by a user correspond to specified characteristics of a simulated digital camera to generate the said “enhanced image having the appearance”, wherein the exposure settings may be an exposure ratio, a ratio between ISO settings, a ratio between shutter speeds, a ratio between aperture values, para [0037]).

BOUZARAA’s disclosure is silent to (with emphasis in bold): “displaying the enhanced image on the mobile device for a user, storing the enhanced image, or transmitting the enhanced image”. However, the bolded feature of said underlined limitation was known in the related field of digital cameras and training a machine learning model for generating enhanced images using an input image captured by a mobile camera. For example, see prior art SHEN, as disclosed by the invention title, Abstract, for the image enhancement system shown in Fig. 1A-B: mobile camera 114A/114B having captured input image(s) enhanced and output 118/130, which may be displayed (Fig. 6), stored, or transmitted as discussed in para [0087-89 & 0164]. 
Thus, when considering the collective knowledge bestowed by each applied prior art, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to COMBINE the teachings of SHEN into suitable modification with the teachings of BOUZARAA to produce Applicant’s claimed invention with the structural arrangement / functional configuration stated in said underlined limitation for the MOTIVATED REASON of enhancing the user experience and versatility of how the enhanced image should be output in the analogous art of training a machine learning model for generating enhanced images using an input image captured by a mobile digital camera.

As per CLAIM 2, BOUZARAA in view of SHEN teaches the method of claim 1, wherein the set of one or more specified characteristics defined by the user input data comprises at least two settings of the exposure triangle of settings (BOUZARAA, see para [0035-37]).

As per CLAIM 3, BOUZARAA in view of SHEN teaches the method of claim 2, wherein the set of one or more specified characteristics defined by the user input data comprises the three settings of the exposure triangle of settings (BOUZARAA, see para [0035-37]).

As per CLAIM 4, BOUZARAA in view of SHEN teaches the method of claim 1, wherein the set of one or more specified characteristics defined by the user input data includes an exposure compensation setting to enable the enhanced image to be under- or over-exposed (BOUZARAA, see para [0037 & 0028]: “to generate a darker output image which corresponds to a shorter exposure time than the input image and a brighter output image which corresponds to a longer exposure time than the input image”).

As per CLAIM 5, BOUZARAA in view of SHEN teaches the method of claim 1. 
Regarding the limitations: “wherein the digital camera is a camera comprising a camera body and an interchangeable lens, and wherein the specified characteristics of the camera defined by the user input data include a body type of the camera body or a lens type of the interchangeable lens”, SHEN taught image enhancement system 111 (see para [0090-91, 0100]) may be optimized for operation with a specific type of imaging sensor 124 of the device “camera body”. The image enhancement system 111 may be trained based on training images captured using a particular type or model of an imaging sensor. Image processing 128 performed by an imaging device may differ between users based on particular configurations and/or settings of the device. For example, different users may have the imaging device settings set differently based on preference and use. BOUZARAA in view of SHEN are silent to an interchangeable lens, and thus, silent to the specified characteristics including lens type of the interchangeable lens. However, these missing features would have been obvious. Therefore, Official Notice (MPEP § 2144.03) is taken that both the concepts and advantages of using specified characteristics of the camera defined by the user input data include a lens type of the interchangeable lens is well known and expected in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use specified characteristics including lens type of the interchangeable lens for the MOTIVATED REASON of enhancing the user experience and accuracy of the generated enhanced images in the analogous art of a digital camera. 
As per CLAIM 6, BOUZARAA in view of SHEN teaches the method of claim 5, wherein the specified characteristics of the camera defined by the user input data comprise a make or model of the body type of the camera body, or of the lens type of the interchangeable lens (See the teachings over BOUZARAA in view of SHEN in further view of the Official Notice made in claim 5).

As per CLAIM 7, BOUZARAA in view of SHEN teaches the method of claim 5 wherein the specified characteristics of the camera defined by the user input data include a focal length of the interchangeable lens (See the teachings over BOUZARAA in view of SHEN in further view of Official Notice made in claim 5).

As per CLAIM 8, BOUZARAA in view of SHEN teaches the method of claim 1, wherein the enhanced image has an image resolution that is higher than a resolution of the image captured with the camera of the mobile device; and wherein using the trained image enhancement neural network to generate the enhanced image includes using the trained image enhancement neural network to add image details to the image captured with the camera of the mobile device (BOUZARAA, see para [0013]: “input image and output image might have different resolutions", thus providing an output image of higher resolution is a straightforward possibility and up-sampling therefore is a generally well-known processing”).

As per CLAIM 9, BOUZARAA in view of SHEN teaches the method of claim 1. BOUZARAA in view of SHEN are silent to: “wherein the digital camera is a digital SLR (DSLR) camera”. However, these missing features would have been obvious. Therefore, Official Notice (MPEP § 2144.03) is taken that both the concepts and advantages of using a digital SLR (DSLR) camera is well known and expected in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a digital SLR (DSLR) for the MOTIVATED REASON of enhancing the user experience and versatility of the camera’s lens capabilities in the analogous art of a digital camera.

As per CLAIM 10, BOUZARAA in view of SHEN teaches the method of claim 1. BOUZARAA in view of SHEN are silent to: “wherein the digital camera is a mirrorless interchangeable-lens camera (MILC)”. However, these missing features would have been obvious. Therefore, Official Notice (MPEP § 2144.03) is taken that both the concepts and advantages of using a digital camera that is a mirrorless interchangeable-lens camera (MILC) is well known and expected in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a mirrorless interchangeable-lens camera (MILC) for the MOTIVATED REASON of enhancing the user experience and versatility of the camera’s lens capabilities, and also camera compactness in the analogous art of a digital camera.

As per CLAIM 11, BOUZARAA in view of SHEN teaches the method of claim 1, wherein the trained image enhancement neural network has been trained whilst conditioned on conditioning tensors defined by Exchangeable Image File (EXIF) data (BOUZARAA, disclosed training uses source images and target images of different exposure settings per para [0016 & 0021], wherein the exposure settings may be retrieved from EXIF data per para [0035]).

As per CLAIM 12, BOUZARAA in view of SHEN teaches the method of claim 1, wherein the trained image enhancement neural network has been trained using an objective that does not require an image captured by a camera of the mobile device to be paired with a corresponding enhanced image (BOUZARAA, discloses that the source image and target image may be paired per para [0016]). 
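For context on the EXIF conditioning mapped to claim 11 above, here is a hedged sketch of one way exposure settings might be pulled from EXIF-style metadata into a conditioning vector. The tag names (FNumber, ExposureTime, ISOSpeedRatings) are standard EXIF fields, but the log-stop normalization is an assumption made for illustration, not the application's disclosed encoding.

```python
import math

# Hypothetical sketch: converting EXIF-style exposure settings into a
# conditioning vector of "stops". Doubling the exposure on any leg of the
# exposure triangle moves the corresponding entry by one unit.

def conditioning_vector(exif):
    f_number = exif["FNumber"]          # aperture, e.g. 2.8 for f/2.8
    exposure_s = exif["ExposureTime"]   # shutter speed in seconds
    iso = exif["ISOSpeedRatings"]
    return [
        2.0 * math.log2(f_number),      # stops relative to f/1.0
        math.log2(exposure_s),          # stops relative to 1 s
        math.log2(iso / 100.0),         # stops relative to ISO 100
    ]

vec = conditioning_vector(
    {"FNumber": 2.8, "ExposureTime": 1 / 250, "ISOSpeedRatings": 400}
)
```

A vector like this could be fed to a network as the conditioning input the claims describe; the aperture term uses 2·log2(N) because exposure halves each time the f-number grows by a factor of √2.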
As per CLAIM 13, BOUZARAA in view of SHEN teaches the method of claim 1, comprising performing the processing using the trained image enhancement neural network on, or controlled by, the mobile device (It is obvious to combine a known camera and associated image processing on a single device such as a smartphone/tablet or laptop given the teachings of BOUZARAA, disclosing the computer devices 100/200 (Fig. 1-2) may be a standalone device used with a plurality of other cameras OR a component of a camera that may also be used with other cameras per the embodiments anticipated in para [0012, 0034-35, 0040-41, 0050-51, 0065, 0096]. Furthermore, see teachings over SHEN, for the image enhancement system 111 shown in Fig. 1A-B & Fig. 6: mobile camera 114A/114B in view of para [0087-91 & 0164]).

As per CLAIM 14, BOUZARAA in view of SHEN teaches the method of claim 1, wherein the user interface is a graphical user interface that simulates the appearance of the digital camera with settings to allow the user to define the characteristics of the exposure triangle (This limitation is obvious over BOUZARAA teachings to provide user input per para [0035-37], and in further view of SHEN’s teachings of a user interface per para [0087-91, 0192]). 
As per INDEPENDENT CLAIM 16, BOUZARAA in view of SHEN teaches a mobile device comprising at least one processor, and at least one storage device communicatively coupled to the at least one processor, wherein the at least one storage device stores instructions that, when executed by the at least one processor (BOUZARAA, para [0002-0003]), causes the at least one processor to perform operations to: “capture an image with a camera of a mobile device; obtain, from a user interface of the mobile device, user input data defining a set of one or more specified characteristics of a digital camera, wherein the set of one or more specified characteristics defines one or more characteristics of an exposure triangle of settings comprising an aperture setting, a shutter speed setting, and an ISO setting of the digital camera; determine, from the user input data, a conditioning tensor that represents features of the one or more specified characteristics; process the image captured with the camera of the mobile device using a trained image enhancement neural network whilst conditioned on the conditioning tensor to generate an enhanced image having the appearance of an image captured by the digital camera with the specified characteristics; and display the enhanced image on the mobile device for a user, store the enhanced image, or transmit the enhanced image” (The “underlined limitations” for the mobile device of claim 16 comprises similar limitations to the computer-implemented method recited in claim 1. Thus, for same reasons, the cited teachings over prior art combination discussed in claim 1 also applies to rejecting the “underlined limitations” in claim 16). Contact Information Any inquiry concerning this communication or earlier communications from the EXAMINER should be directed to AKSHAY TREHAN whose telephone number is (571) 270-5252. The examiner can normally be reached between the hours of 10am – 6pm during the weekdays Monday – Friday. 
Interviews with the examiner are available via telephone AND video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant may contact the examiner via telephone OR use the USPTO Automated Interview Request (AIR), which can be found at: http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TWYLER HASKINS can be reached on (571) 272-7406. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AKSHAY TREHAN/ Examiner, Art Unit 2639 /TWYLER L HASKINS/Supervisory Patent Examiner, Art Unit 2639

Prosecution Timeline

Sep 30, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598367
ELECTRONIC APPARATUS CAPABLE OF SETTING AN OPERATION MODE OF A COOLING UNIT AND CONTROL METHOD
2y 5m to grant (granted Apr 07, 2026)
Patent 12574627
Smart Cameras Enabled by Assistant Systems
2y 5m to grant (granted Mar 10, 2026)
Patent 12574636
ELECTRONIC DEVICE AND METHOD FOR CONTROLLING PREVIEW IMAGE TO DIFFERENT MAGNIFICATIONS
2y 5m to grant (granted Mar 10, 2026)
Patent 12564131
Shielding Structure for Camera and Mower
2y 5m to grant (granted Mar 03, 2026)
Patent 12563297
MOTOR, CAMERA MODULE, AND ELECTRONIC DEVICE
2y 5m to grant (granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
71%
Grant Probability
95%
With Interview (+23.5%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 560 resolved cases by this examiner. Grant probability derived from career allow rate.
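As a sanity check on the projection arithmetic, the headline figures can be reproduced from the counts in the examiner profile above (399 granted of 560 resolved, +23.5% interview lift). Treating the interview-adjusted figure as the base rate plus the reported lift is an assumption about how the tool combines them.

```python
# Reproducing the dashboard's headline numbers from the reported counts.
granted, resolved = 399, 560
allow_rate = granted / resolved                 # career allow rate, ~0.7125
interview_lift = 0.235                          # reported lift from interviews
# Assumed combination rule: base rate plus lift, capped at 100%.
with_interview = min(allow_rate + interview_lift, 1.0)

print(f"{allow_rate:.0%} base, {with_interview:.0%} with interview")
```

Rounded to whole percentages this gives 71% and 95%, matching the Grant Probability and With Interview figures shown.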
