Prosecution Insights
Last updated: April 19, 2026
Application No. 18/423,540

METHOD FOR SIMULATING IMAGES WITH ABERRATIONS BASED ON RAY TRACING

Final Rejection: §103, §112
Filed: Jan 26, 2024
Examiner: LE, JOHNNY TRAN
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: GM Global Technology Operations LLC
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Grants 67% — above average

Career Allow Rate: 67% (2 granted / 3 resolved; +4.7% vs TC avg)
Interview Lift: -66.7% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 9m typical timeline (32 applications currently pending)
Career History: 35 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Tech Center averages are estimates; based on career data from 3 resolved cases.
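The headline numbers above are simple ratios over the examiner's small resolved-case history, so a single case moves them a lot. As a minimal sketch of the arithmetic (the record layout and the lift definition are assumptions for illustration, not this report's actual data schema):

    # Hypothetical resolved-case records; field names are assumptions for illustration.
    cases = [
        {"granted": True,  "interview": False},
        {"granted": True,  "interview": False},
        {"granted": False, "interview": True},
    ]

    def allow_rate(subset):
        """Fraction of cases in the subset that granted."""
        return sum(c["granted"] for c in subset) / len(subset) if subset else float("nan")

    overall = allow_rate(cases)                                     # 2/3 ~ 66.7%
    with_iv = allow_rate([c for c in cases if c["interview"]])      # 0/1 = 0%
    lift = with_iv - overall                                        # ~ -66.7 points, matching the report
    print(f"allow={overall:.1%} with_interview={with_iv:.1%} lift={lift:+.1%}")

On this reading, the -66.7% interview lift would reflect a single resolved case with an interview that did not grant, which is why the report flags the signal as minimal.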

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the amendment filed on 12/08/2025. Claims 1, 3-4, 6, 9, 11-13, 15, 17, and 19 have been amended; claims 2, 5, 7-8, and 18 were cancelled; and claims 20-25 are new additions. Claims 1, 3-4, 6, 9-17, and 19 remain rejected, and claims 20-25 are rejected.

Response to Arguments

1. Applicant's arguments filed on 12/08/2025 with respect to independent claims 1, 13, and 15, directed to the rejection under 35 U.S.C. § 103 and asserting that the prior art does not teach, among other limitations, "training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the object," have been considered but are moot in view of the new grounds of rejection.

2. Regarding the arguments directed to claims 3-4, 6, 9-12, 14, 16-17, and 19: these claims depend directly or indirectly on independent claims 1, 13, and 15, respectively, and Applicant presents no arguments beyond those directed to the independent claims. The limitations of these dependent claims, in the combinations relied upon, were largely established as previously explained; a few mappings have been adjusted to track the amendments to the independent claims, and new grounds of rejection are applied to some of the amended dependent claims.

3. Claims 2, 5, 7-8, and 18 have been cancelled by Applicant, as noted above, and are not treated further.

4. Claims 20-25 are newly added and depend from independent claim 1. They have been considered and are rejected under new grounds of rejection under 35 U.S.C. § 103. In addition, claim 24 is also rejected under 35 U.S.C. § 112.

Claim Rejections - 35 USC § 112

5. The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

6. Claim 24 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

9. Claim(s) 1 and 20-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang) and Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (pp. 0-0) (hereinafter Carlson).

10. Regarding claim 1, Mohammadikaji teaches a method for simulating images with aberrations ([Section IV] reciting “To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor.”), comprising: ray tracing planes of objects to produce plane images ([Section III. B] reciting “Physically-based rendering serves the applications where rendering time and computational complexity are not the critical issues but the realism matters. As its name implies, the ultimate goal in realistic rendering is to synthesize images which look very similar to real images. To this end, these methods attempt to physically simulate multiple interactions of light and matter, which essentially compute a solution to the light transport equation (LTE).”; See also Fig. 2); and a location in three-dimensional space for each of the objects ([Section III] reciting “To synthesize an image, generally two pieces of information are required: the visibility, referring to which object is visible from a given image location, and the shading, the color corresponding to the location.”); estimating a point spread function from the ray tracing ([Section V] reciting “The first part is in common with all photorealistic rendering approaches and concerns utilizing a proper ray-tracing approach. In the rest of this section, we elaborate the optics and the sensor simulation components.”; [Section V. B] reciting “In Fourier optics, an optical system is modeled as a linear system with a point spread function (PSF) [30]. Thus, the response of the optic can be simulated by a convolution, or equivalently, a multiplication in the frequency domain.”); generating synthetic three-dimensional rendered images based on the dataset (See Fig. 11); generating synthetic imagery from the plane images ([Section IV] reciting “The most important aspect in synthesizing images of a measurement system is to generate reliable images.”); rendering photo-realistic scenes on the synthetic imagery ([Section VI: Simulation Result; Fig. 11] reciting “A cylinder head in a laser triangulation inspection. (left): real camera image, (right): simulated image using the proposed sensor-realistic simulation framework.”); and

11. Mohammadikaji does not explicitly teach feeding a ray-tracing simulator with one or more synthetic impulse images or scenes including one or more objects (although Mohammadikaji could teach this limitation ([Section IV] reciting “To metrologically evaluate a machine vision setup, such as in terms of the measurement uncertainty [27], the provided synthetic images must be physically-based with realistic prediction of the average intensities and the image noise. Noise in real imaging systems appears due to unwanted optical effects or as a result of the electronic and photon noise. To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation”), Yang teaches it more fully), wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; estimating a point spread function from the ray tracing of the plane images by generating a dataset of a two-dimensional point spread function per each depth over the three-dimensional space; … creating one or more perceived images based on the dataset; feeding the perceived images into a perception pipeline; generating synthetic imagery from the plane images and the perceived images; … and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects.

12. Yang teaches feeding a ray-tracing simulator with one or more synthetic impulse images or scenes including one or more objects ([Fig. 2] reciting “A MLP network is trained to represent the PSF for different positions and focus distances. We use ray tracing to calculate accurate PSF as the ground truth. The network takes as input the object positions (x,y,z) and focus distance fd, and produces a 2D matrix as output.”; [Section III B.; Titled “Aberration Simulator”] reciting “Our contribution lies in the accurate simulation of aberrations for the training data to improve on the classical thin lens model. The PSF characterizes the optical lens response to a point source of light. We can convolve the per-pixel PSF with the object image to simulate the image captured by a camera.”), and a location in three-dimensional space for each of the objects; estimating a point spread function from the ray tracing of the plane images by generating a dataset of a two-dimensional point spread function per each depth over the three-dimensional space ([Section IV. A] reciting “We train the PSF network for 400,000 iterations to overfit the lens. In each iteration, we randomly focus the lens to a distance of fd, and uniformly select 256 points in object space for training. The ground-truth PSFs are computed by tracing 1024 rays from each object point, and (x,y,z,fd) coordinates are provided to the network as input. We use a wavelength of 589 nm and set the PSF size to 11×11 sensor pixels.”; [Section IV. B] reciting “For comparison purposes, we also tested the low-rank PSF estimation model described in [43], [47]. In this model, we use ray tracing to calculate the PSF of the surrounding 8 positions and employ trilinear interpolation to obtain the center PSF. We divide the object space into 20 depths, with 64 grids in each depth plane. The PSFs of these positions are calculated and used for querying. As depicted in Fig. 3, the low-rank PSF model can estimate PSFs similar to the ground truth. Since the PSF is slowly varying in the object space, using enough sampled PSFs for querying can yield promising results.”); … creating one or more perceived images based on the dataset; feeding the perceived images into a perception pipeline; generating synthetic imagery from the plane images and the perceived images (See Fig. 1); and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes and the location in three-dimensional space for each of the objects ([Section I.] reciting “To overcome the domain gap resulting from optical aberrations, we introduce aberration-aware training (AAT) that enables the network to learn these optical aberrations during the training. Our AAT method consists of a lightweight point spread function (PSF) network and a re-rendering process to simulate aberrated training images… We propose a lightweight network that can represent the PSF of a real lens at different focus distances and object positions. This PSF network can then simulate aberrated and realistic images for aberration-aware training.”; [Section IV. A] reciting “We train the PSF network for 400,000 iterations to overfit the lens. In each iteration, we randomly focus the lens to a distance of fd, and uniformly select 256 points in object space for training.”).

13. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Mohammadikaji to incorporate the teachings of Yang, to provide a method that incorporates a type of simulator, a specific type of point spread function, and a neural network for ray-tracing the synthetic images from the teachings of Mohammadikaji. Doing so would enable the network to learn to extract more accurate image features in the presence of optical aberrations, as stated by Yang ([Section VII.] recited).

14. Mohammadikaji in view of Yang does not explicitly teach … wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; … and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects.
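For context, the PSF network Yang describes in the passages quoted above is a small MLP from an object position and focus distance (x, y, z, fd) to a 2D point spread function, trained against ray-traced ground truth. A minimal PyTorch sketch of that idea follows; the hidden-layer sizes and the normalization step are assumptions for illustration, not details taken from Yang:

    import torch
    import torch.nn as nn

    class PSFNet(nn.Module):
        """Maps (x, y, z, focus distance) to a k x k point spread function."""
        def __init__(self, k: int = 11, hidden: int = 64):  # k=11 per Yang's 11x11 PSFs; hidden size assumed
            super().__init__()
            self.k = k
            self.mlp = nn.Sequential(
                nn.Linear(4, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, k * k), nn.Sigmoid(),  # Yang reports a Sigmoid after the output layer
            )

        def forward(self, xyzfd: torch.Tensor) -> torch.Tensor:
            psf = self.mlp(xyzfd).view(-1, self.k, self.k)
            # Normalizing each PSF to sum to 1 is an assumption (keeps image brightness constant).
            return psf / psf.sum(dim=(1, 2), keepdim=True).clamp_min(1e-8)

    # Ground-truth PSFs would come from ray tracing; random targets stand in for them here.
    net = PSFNet()
    coords = torch.rand(256, 4)            # 256 sampled object points per iteration, per Yang Sec. IV.A
    target = torch.rand(256, 11, 11)
    loss = nn.functional.mse_loss(net(coords), target)
    loss.backward()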
15. Carlson teaches … wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; … and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects ([Section 3.6 “Generating Augmented Training Data”] reciting “We use the original image labels as the labels for the augmented data. Pixel artifacts from cameras, like chromatic aberration and blur, make the object boundaries noisy. Thus, the original target labels are used to ensure that the network makes robust and accurate predictions in the presence of camera effects”).

16. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang) to incorporate the teachings of Carlson to provide labels for the synthetic images/objects for the aberration process and PSF networks provided by the teachings of Mohammadikaji in view of Yang. Doing so would ensure that the network makes robust and accurate predictions in the presence of camera effects, as stated by Carlson ([Section 3.6] recited).

17. Regarding claim 20, Mohammadikaji in view of Yang and Carlson teaches the method of claim 1, wherein the synthetic three-dimensional rendered images each include a per-pixel depth map (Yang; [Section III. B “Aberration Simulator”] reciting “Our contribution lies in the accurate simulation of aberrations for the training data to improve on the classical thin lens model. The PSF characterizes the optical lens response to a point source of light. We can convolve the per-pixel PSF with the object image to simulate the image captured by a camera… The k²-channel output is then reshaped into a k×k 2D tensor. After training, we fix the parameters of the PSF network and use it to estimate the PSF for various object positions and focus distances. Then, we can use the per-pixel PSF to render aberrated images for the subsequent depth estimation task.”).

18. Regarding claim 21, Mohammadikaji in view of Yang and Carlson teaches the method of claim 20 (see the rejections of claims 1 and 20 above), including filtering groups of pixels in the synthetic three-dimensional rendered images with a point spread function filter according to their corresponding depth extracted from the per-pixel depth map (Yang; [Section I] reciting “Our AAT method consists of a lightweight point spread function (PSF) network and a re-rendering process to simulate aberrated training images. First, we compute the spatially-varying PSF of an optical lens using ray tracing [30], [31], and train a multilayer perceptron (MLP) to represent it. Once trained, the network can efficiently estimate the PSF for different object positions and focus distances. Next, we render training images to apply off-axis aberrations.”; [Section III. B] reciting “Then, we can use the per-pixel PSF to render aberrated images for the subsequent depth estimation task.”).
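For context, the depth-dependent filtering recited for claims 20-21 can be pictured as binning pixels by the per-pixel depth map and convolving each bin with the PSF for that depth, echoing Yang's 20 depth planes. An illustrative sketch under that reading (the binning scheme and the psf_for_depth lookup are hypothetical, not the claimed method itself):

    import numpy as np
    from scipy.signal import fftconvolve

    def render_aberrated(image, depth, psf_for_depth, n_bins=20):
        """Blur each depth bin with its own 2D PSF, then composite the results."""
        edges = np.linspace(depth.min(), depth.max(), n_bins + 1)
        bins = np.clip(np.digitize(depth, edges) - 1, 0, n_bins - 1)
        acc = np.zeros_like(image, dtype=float)
        wsum = np.zeros_like(image, dtype=float)
        for i in range(n_bins):
            mask = (bins == i).astype(float)
            if not mask.any():
                continue
            psf = psf_for_depth(0.5 * (edges[i] + edges[i + 1]))  # hypothetical per-depth PSF lookup
            acc += fftconvolve(image * mask, psf, mode="same")
            wsum += fftconvolve(mask, psf, mode="same")
        return acc / np.maximum(wsum, 1e-8)  # normalized composite; occlusion ordering is ignored

This simple composite ignores occlusion at depth discontinuities, which a full per-pixel PSF renderer like the one Yang describes would handle.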
19. Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang) and Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (pp. 0-0) (hereinafter Carlson) as applied to claim 1, further in view of Jia et al. (US 20220198611 A1).

20. Regarding claim 3, Mohammadikaji in view of Yang and Carlson teaches the method of claim 1 (see the rejection of claim 1 above), further comprising: implementing a blurring effect including at least one aberration (Mohammadikaji; [Section V. A; 1b)] reciting “Aberrations are generally either chromatic or monochromatic [29]. Chromatic aberrations are caused by the varying refractive index of a lens for different wavelengths. Monochromatic aberrations can be caused by the lens geometry, lens misconfiguration, or they can be due to the oversimplifying assumptions typically made for optical derivations. A typical example of the latter is the paraxial approximation, in which one assumes that rays entering the optical system hold a very small angle to the optical axis. Rays not satisfying this assumption cause aberrations. Typically, choosing a wider aperture strengthens the aberration effects, mainly because more non-paraxial rays enter the optical system. A familiar aberration is defocus, in which the center of the spherical wave at the exit pupil is either in front or behind the image plane, yielding a blurred image.”); and implementing a noise tradeoff (Mohammadikaji; [Section IV] reciting “Noise in real imaging systems appears due to unwanted optical effects or as a result of the electronic and photon noise. To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor. Although sensor simulation is not a totally new concept in computer graphics, this paper proposes using an industry standard model for simulating the sensors and their corresponding intensity noise, which can cover a wide range of imaging sensors in the market.”).

21. Mohammadikaji in view of Yang and Carlson does not explicitly teach implementing a noise tradeoff including increasing an information rate, a signal-to-noise ratio and an allocated bandwidth.

22. Jia teaches including increasing an information rate, a signal-to-noise ratio and an allocated bandwidth ([Abstract] reciting “In a method of filtering an image from data received from a CMOS camera, image data is loaded by a computational device from the camera.”; [0024] reciting “This is even more relevant in sCMOS sensors, where the increased signal capacity and much lower readout noise comes at the expenses of the fixed pattern noise due to pixel gain fluctuations.”; [0049] reciting “If we call H(f) the modulation transfer function (MTF) of the system, we have that: … where N₀ is a constant value that represents the noise power per unit bandwidth.”; [0071] reciting “We validated the performance of ACsN under various sampling rates normally adopted for fluorescence microscopy. In practice, a sampling rate close to the Nyquist criterion represents a good tradeoff between signal to noise ratio (SNR) and detail preservation. Here, examining numerically and experimentally across a wide range of sampling rates, we demonstrated the viability of ACsN for low SNR with over-sampling and no noticeable loss of signals with under-sampling.”).

23. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang and Carlson) to incorporate the teachings of Jia to provide additional elements, such as bandwidth, signal-to-noise ratio, and a type of information rate, for the noise-tradeoff teachings of Mohammadikaji in view of Yang and Carlson. Doing so would allow detail preservation, as stated by Jia ([0071] recited).
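For context, the noise-tradeoff discussion in paragraphs 20-23 rests on a standard sensor model: photon (shot) noise grows with the collected signal while read noise stays roughly constant, so SNR improves as more signal is gathered. A minimal sketch under those textbook assumptions (parameter values are illustrative, not taken from the cited references):

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_sensor(photons, read_noise_e=2.0, gain=1.0):
        """Apply photon (Poisson) and read (Gaussian) noise to an ideal photon image."""
        shot = rng.poisson(photons).astype(float)                 # shot noise: variance equals the mean
        electrons = shot + rng.normal(0.0, read_noise_e, photons.shape)
        return gain * electrons

    signal = 500.0                                   # mean photons per pixel (illustrative)
    noisy = simulate_sensor(np.full((4, 4), signal))
    snr = signal / np.sqrt(signal + 2.0 ** 2)        # SNR = S / sqrt(S + read_noise^2)
    print(f"expected SNR ~ {snr:.1f}")               # collecting more signal raises SNR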
24. Claim(s) 4 and 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang), Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (pp. 0-0) (hereinafter Carlson), and Jia et al. (US 20220198611 A1) as applied to claims 1 and 3, further in view of Morrical et al. (US 20230066636 A1).

25. Regarding claim 4, Mohammadikaji in view of Yang, Carlson, and Jia teaches the method of claim 3 (see the rejections of claims 1 and 3 above), but does not explicitly teach further comprising: providing an actual imaging sensor of a vehicle, wherein the imaging sensor of the vehicle is located within a cabin of the vehicle.

26. Morrical teaches providing an actual imaging sensor of a vehicle, wherein the imaging sensor of the vehicle is located within a cabin of the vehicle ([0168] reciting “Such components can be used to render images using ray tracing-based importance sampling, which can be accelerated through hardware.”; [0182] reciting “…TPU(s) may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.)… a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification and detection using data from microphones 1696; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events.”; [0201] reciting “In at least one embodiment, processor(s) 1610 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce final image for player window. In at least one embodiment, video image compositor may perform lens distortion correction on wide-view camera(s) 1670, surround camera(s) 1674, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC(s) 1604, configured to identify in cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change vehicle's destination, activate or change vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to driver when vehicle is operating in an autonomous mode and are disabled otherwise.”).

27. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang, Carlson, and Jia) to incorporate the teachings of Morrical to provide a method that utilizes a type of image sensor that can be used within the cabins of specific vehicles, where the sensor can utilize ray-tracing technology similar to what was taught by Mohammadikaji in view of Yang, Carlson, and Jia. Doing so would allow the sensor to be configured to identify in-cabin events and respond accordingly, as stated by Morrical ([0201] recited).

28. Regarding claim 6, Mohammadikaji in view of Yang, Carlson, Jia, and Morrical teaches the method of claim 4 (see the rejections of claims 1 and 3-4 above), further comprising: incorporating optical effects into the synthetic imagery generated (Mohammadikaji; [Section IV] reciting “To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor.”).

29. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang), Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (pp. 0-0) (hereinafter Carlson), Jia et al. (US 20220198611 A1), and Morrical et al. (US 20230066636 A1) as applied to claims 1, 3-4, and 6, further in view of Curzan et al. (US 20230041139 A1).

30. Regarding claim 9, Mohammadikaji in view of Yang, Carlson, Jia, and Morrical teaches the method of claim 6 (see the rejections of claims 1, 3-4, and 6 above), but does not explicitly teach further comprising: generating the point spread function sampled at multiple locations in one or more frames.

31. Curzan teaches generating the point spread function sampled at multiple locations in one or more frames ([0081] reciting “A point spread function (PSF) blur is determined based on a PSF of a lens for an infrared detection system, at 802. The PSF blur and infrared data may be optionally up-sampled, at 804. In some embodiments, the determination of the PSF blur at 802 and the up-sampling of the PSF blur may be determined once for an infrared detection system. In some embodiments, the up-sampled PSF and infrared data pixel values may be interpolated using linear, cubic, or other method … the data may be compressed and/or otherwise processed. In some embodiments, at least a portion of the method is repeated for each frame of an infrared video. For example, 804 (if used), 806, 808 (if used) and 810 may be repeated for additional frames.”).

32. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang, Carlson, Jia, and Morrical) to incorporate the teachings of Curzan so that the point spread function taught by Mohammadikaji in view of Yang, Carlson, Jia, and Morrical is sampled at many locations across frames, as taught by Curzan. Doing so would make the viewing less visually pixelated to an observer, as stated by Curzan ([0081] recited).
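For context, sampling the PSF at a handful of locations and interpolating between samples, as in Curzan's interpolation step above and in the gridded low-rank model Yang describes in paragraph 12, can be sketched as a bilinear lookup over image position; Yang adds a depth axis, making the lookup trilinear. The grid size and query helper below are illustrative assumptions:

    import numpy as np

    # Hypothetical precomputed table: PSFs ray-traced on an 8x8 grid of image locations.
    GRID = 8
    psf_table = np.random.rand(GRID, GRID, 11, 11)          # stand-in for ray-traced 11x11 PSFs
    psf_table /= psf_table.sum(axis=(2, 3), keepdims=True)  # normalize each kernel

    def psf_at(u, v):
        """Bilinearly interpolate the four surrounding grid PSFs at normalized (u, v) in [0, 1]."""
        x, y = u * (GRID - 1), v * (GRID - 1)
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, GRID - 1), min(y0 + 1, GRID - 1)
        fx, fy = x - x0, y - y0
        return ((1 - fx) * (1 - fy) * psf_table[y0, x0] + fx * (1 - fy) * psf_table[y0, x1]
                + (1 - fx) * fy * psf_table[y1, x0] + fx * fy * psf_table[y1, x1])

    kernel = psf_at(0.37, 0.81)  # PSF for an arbitrary image location; repeat per frame as needed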
33. Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang), and Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (pp. 0-0) (hereinafter Carlson) as applied to claim 1, further in view of Morrical et al. (US 20230066636 A1).

34. Regarding claim 10, Mohammadikaji in view of Yang and Carlson teaches the method of claim 1 (see the rejection of claim 1 above), but does not explicitly teach further comprising: providing an actual imaging sensor of a vehicle, wherein the imaging sensor of the vehicle is located within a cabin of the vehicle.

35. Morrical teaches providing an actual imaging sensor of a vehicle, wherein the imaging sensor of the vehicle is located within a cabin of the vehicle ([0168] reciting “Such components can be used to render images using ray tracing-based importance sampling, which can be accelerated through hardware.”; [0182] reciting “…TPU(s) may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.)… a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification and detection using data from microphones 1696; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events.”; [0201] reciting “In at least one embodiment, processor(s) 1610 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce final image for player window. In at least one embodiment, video image compositor may perform lens distortion correction on wide-view camera(s) 1670, surround camera(s) 1674, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC(s) 1604, configured to identify in cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change vehicle's destination, activate or change vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to driver when vehicle is operating in an autonomous mode and are disabled otherwise.”).

36. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang and Carlson) to incorporate the teachings of Morrical to provide a method that utilizes a type of image sensor that can be used within the cabins of specific vehicles, where the sensor can utilize ray-tracing technology similar to what was taught by Mohammadikaji in view of Yang and Carlson. Doing so would allow the sensor to be configured to identify in-cabin events and respond accordingly, as stated by Morrical ([0201] recited).

37. Claim(s) 11-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang), and Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (pp. 0-0) (hereinafter Carlson) as applied to claim 1, further in view of Lehmann, M., Wittpahl, C., Zakour, H. B., & Braun, A. (2019). Modeling realistic optical aberrations to reuse existing drive scene recordings for autonomous driving validation. Journal of Electronic Imaging, 28(1), 013005-013005 (hereinafter Lehmann).

38. Regarding claim 11, Mohammadikaji in view of Yang and Carlson teaches the method of claim 1 (see the rejection of claim 1 above), further comprising: incorporating optical effects into the synthetic imagery generated (Mohammadikaji; [Section IV] reciting “To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor.”).

39. Mohammadikaji in view of Yang and Carlson does not explicitly teach incorporating optical effects into the synthetic imagery generated, wherein test-drives are not needed for data collection due to the synthetic imagery generated.
40. Lehmann teaches wherein test-drives are not needed for data collection due to the synthetic imagery generated ([Abstract] reciting “With this model, it is possible to reuse existing recordings, with the potential to avoid millions of test drive miles.”; [Section 1] reciting “…and then convolving with a different set of optical properties would dramatically reduce the number of necessary test drives, while at the same time increasing testing depth and function robustness.”; See also Fig. 10).

41. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang and Carlson) to incorporate the teachings of Lehmann to provide a confirmation wherein test drives are not needed, using the optical effects taught by Mohammadikaji in view of Yang and Carlson. Doing so would provide a valid method to validate the functional and safety limits of camera-based advanced driver assistance systems, as stated by Lehmann ([Abstract] recited).

42. Regarding claim 12, Mohammadikaji in view of Yang and Carlson teaches the method of claim 11 (see the rejections of claims 1 and 11 above), further comprising: rendering one or more photo-realistic scenes on the synthetic imagery generated (Mohammadikaji; [Section III. B] reciting “Physically-based rendering serves the applications where rendering time and computational complexity are not the critical issues but the realism matters. As its name implies, the ultimate goal in realistic rendering is to synthesize images which look very similar to real images.”; [Section IV] reciting “Although these methods often sacrifice realism to achieve a high frame rate, they already contain accurate information for evaluating the field of view, geometry of the imaged scene, resolution, visible surface area, and reachability of direct light to the surface.”).

43. Claim(s) 13 and 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang), Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (pp. 0-0) (hereinafter Carlson), and Morrical et al. (US 20230066636 A1).

44. Regarding claim 13, Mohammadikaji teaches ray tracing planes of objects to produce plane images ([Section III. B] reciting “Physically-based rendering serves the applications where rendering time and computational complexity are not the critical issues but the realism matters. As its name implies, the ultimate goal in realistic rendering is to synthesize images which look very similar to real images. To this end, these methods attempt to physically simulate multiple interactions of light and matter, which essentially compute a solution to the light transport equation (LTE).”; See also Fig. 2); and a location in three-dimensional space for each of the objects ([Section III] reciting “To synthesize an image, generally two pieces of information are required: the visibility, referring to which object is visible from a given image location, and the shading, the color corresponding to the location.”); estimating a point spread function from the ray tracing ([Section V] reciting “The first part is in common with all photorealistic rendering approaches and concerns utilizing a proper ray-tracing approach. In the rest of this section, we elaborate the optics and the sensor simulation components.”; [Section V. B] reciting “In Fourier optics, an optical system is modeled as a linear system with a point spread function (PSF) [30]. Thus, the response of the optic can be simulated by a convolution, or equivalently, a multiplication in the frequency domain.”); generating synthetic three-dimensional rendered images based on the dataset (See Fig. 11); implementing a blurring effect; implementing a noise tradeoff ([Section V. A; 1b)] reciting “A familiar aberration is defocus, in which the center of the spherical wave at the exit pupil is either in front or behind the image plane, yielding a blurred image.”; [Section IV] reciting “Noise in real imaging systems appears due to unwanted optical effects or as a result of the electronic and photon noise. To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor. Although sensor simulation is not a totally new concept in computer graphics, this paper proposes using an industry standard model for simulating the sensors and their corresponding intensity noise, which can cover a wide range of imaging sensors in the market.”); generating synthetic imagery from the plane images ([Section IV] reciting “The most important aspect in synthesizing images of a measurement system is to generate reliable images.”); incorporating optical effects into the synthetic imagery generated ([Section IV] reciting “To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor.”); rendering one or more photo-realistic scenes on the synthetic imagery generated ([Section VI: Simulation Result; Fig. 11] reciting “A cylinder head in a laser triangulation inspection. (left): real camera image, (right): simulated image using the proposed sensor-realistic simulation framework.”),

45. Mohammadikaji does not explicitly teach a non-transitory computer-readable storage medium embodying programmed instructions which, when executed by a processor, are operable for performing a method comprising: … feeding a ray-tracing simulator with one or more synthetic impulse images or scenes including one or more objects (although Mohammadikaji could teach this limitation ([Section IV] reciting “To metrologically evaluate a machine vision setup, such as in terms of the measurement uncertainty [27], the provided synthetic images must be physically-based with realistic prediction of the average intensities and the image noise. Noise in real imaging systems appears due to unwanted optical effects or as a result of the electronic and photon noise. To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation”), Yang teaches it more fully), wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; estimating a point spread function from the ray tracing of the plane images by generating a dataset of a two-dimensional point spread function per each depth over the three-dimensional space; … creating one or more perceived images based on the dataset; … feeding the perceived images into a perception pipeline; generating synthetic imagery from the plane images and the perceived images; … rendering one or more photo-realistic scenes on the synthetic imagery generated wherein an imaging sensor is located within a cabin of a vehicle; feeding a ray-tracing simulator with one or more synthetic impulse images or scenes (although Mohammadikaji could teach this limitation (see the earlier statement), Yang teaches it more fully); and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects.
46. Yang teaches feeding a ray-tracing simulator with one or more synthetic impulse images or scenes including one or more objects ([Fig. 2] reciting “A MLP network is trained to represent the PSF for different positions and focus distances. We use ray tracing to calculate accurate PSF as the ground truth. The network takes as input the object positions (x,y,z) and focus distance fd, and produces a 2D matrix as output.”; [Section III B.; Titled “Aberration Simulator”] reciting “Our contribution lies in the accurate simulation of aberrations for the training data to improve on the classical thin lens model. The PSF characterizes the optical lens response to a point source of light. We can convolve the per-pixel PSF with the object image to simulate the image captured by a camera.”), and a location in three-dimensional space for each of the objects; estimating a point spread function from the ray tracing of the plane images by generating a dataset of a two-dimensional point spread function per each depth over the three-dimensional space ([Section IV. A] reciting “We train the PSF network for 400,000 iterations to overfit the lens. In each iteration, we randomly focus the lens to a distance of fd, and uniformly select 256 points in object space for training. The ground-truth PSFs are computed by tracing 1024 rays from each object point, and (x,y,z,fd) coordinates are provided to the network as input. We use a wavelength of 589 nm and set the PSF size to 11×11 sensor pixels.”; [Section IV. B] reciting “For comparison purposes, we also tested the low-rank PSF estimation model described in [43], [47]. In this model, we use ray tracing to calculate the PSF of the surrounding 8 positions and employ trilinear interpolation to obtain the center PSF. We divide the object space into 20 depths, with 64 grids in each depth plane. The PSFs of these positions are calculated and used for querying. As depicted in Fig. 3, the low-rank PSF model can estimate PSFs similar to the ground truth. Since the PSF is slowly varying in the object space, using enough sampled PSFs for querying can yield promising results.”); … creating one or more perceived images based on the dataset; feeding the perceived images into a perception pipeline; generating synthetic imagery from the plane images and the perceived images (See Fig. 1); … feeding a ray-tracing simulator with one or more synthetic impulse images or scenes ([Fig. 2] reciting “A MLP network is trained to represent the PSF for different positions and focus distances. We use ray tracing to calculate accurate PSF as the ground truth. The network takes as input the object positions (x,y,z) and focus distance fd, and produces a 2D matrix as output.”; [Section III B.; Titled “Aberration Simulator”] reciting “Our contribution lies in the accurate simulation of aberrations for the training data to improve on the classical thin lens model. The PSF characterizes the optical lens response to a point source of light. We can convolve the per-pixel PSF with the object image to simulate the image captured by a camera… We use ReLU activation functions after each input and hidden layer and a Sigmoid activation function after the output layer. The k²-channel output is then reshaped into a k×k 2D tensor. After training, we fix the parameters of the PSF network and use it to estimate the PSF for various object positions and focus distances. Then, we can use the per-pixel PSF to render aberrated images for the subsequent depth estimation task.”); and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes and the location in three-dimensional space for each of the objects ([Section I.] reciting “To overcome the domain gap resulting from optical aberrations, we introduce aberration-aware training (AAT) that enables the network to learn these optical aberrations during the training. Our AAT method consists of a lightweight point spread function (PSF) network and a re-rendering process to simulate aberrated training images… We propose a lightweight network that can represent the PSF of a real lens at different focus distances and object positions. This PSF network can then simulate aberrated and realistic images for aberration-aware training.”; [Section IV. A] reciting “We train the PSF network for 400,000 iterations to overfit the lens. In each iteration, we randomly focus the lens to a distance of fd, and uniformly select 256 points in object space for training.”).

47. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method taught by Mohammadikaji to incorporate the teachings of Yang, to provide a method that incorporates a type of simulator, a specific type of point spread function, and a neural network for ray-tracing the synthetic images from the teachings of Mohammadikaji. Doing so would enable the network to learn to extract more accurate image features in the presence of optical aberrations, as stated by Yang ([Section VII.] recited).

48. Mohammadikaji in view of Yang does not explicitly teach … a non-transitory computer-readable storage medium embodying programmed instructions which, when executed by a processor, are operable for performing a method comprising: wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; … rendering one or more photo-realistic scenes on the synthetic imagery generated wherein an imaging sensor is located within a cabin of a vehicle; … and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects.

49. Carlson teaches … wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; … and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects ([Section 3.6 “Generating Augmented Training Data”] reciting “We use the original image labels as the labels for the augmented data. Pixel artifacts from cameras, like chromatic aberration and blur, make the object boundaries noisy. Thus, the original target labels are used to ensure that the network makes robust and accurate predictions in the presence of camera effects”).

50. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang) to incorporate the teachings of Carlson to provide labels for the synthetic images/objects for the aberration process and PSF networks provided by the teachings of Mohammadikaji in view of Yang. Doing so would ensure that the network makes robust and accurate predictions in the presence of camera effects, as stated by Carlson ([Section 3.6] recited).

51. Mohammadikaji in view of Yang and Carlson does not explicitly teach a non-transitory computer-readable storage medium embodying programmed instructions which, when executed by a processor, are operable for performing a method comprising: … rendering one or more photo-realistic scenes on the synthetic imagery generated wherein an imaging sensor is located within a cabin of a vehicle.

52. Morrical teaches a non-transitory computer-readable storage medium embodying programmed instructions which, when executed by a processor, are operable for performing a method ([0256] reciting “In at least one embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions (or other memory to store executable instructions) that, when executed (i.e., as a result of being executed) by one or more processors of a computer system, cause computer system to perform operations described herein.”): … to render one or more photo-realistic scenes on the generated synthetic imagery, wherein an imaging sensor is located within a cabin of the vehicle ([0201] reciting “In at least one embodiment, processor(s) 1610 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce final image for player window. In at least one embodiment, video image compositor may perform lens distortion correction on wide-view camera(s) 1670, surround camera(s) 1674, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC(s) 1604, configured to identify in cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change vehicle's destination, activate or change vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to driver when vehicle is operating in an autonomous mode and are disabled otherwise.”).

53. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang and Carlson) to incorporate the teachings of Morrical to provide a non-transitory computer-readable medium for executing various instructions from the teachings of Mohammadikaji, as well as to provide a method that utilizes a type of image sensor that can be used in the cabins of vehicles, where the sensor functions can utilize ray-tracing technology similar to what was taught by Mohammadikaji in view of Yang and Carlson. Doing so would allow the sensor to be configured to identify in-cabin events and respond accordingly, as stated by Morrical ([0201] recited).
2]); PNG media_image1.png 338 958 media_image1.png Greyscale and a location in three-dimensional space for each of the objects ([Section III] reciting “To synthesize an image, generally two pieces of information are required: the visibility, referring to which object is visible from a given image location, and the shading, the color corresponding to the location.”); estimating a point spread function from the ray tracing ([Section V] reciting “The first part is in common with all photorealistic rendering approaches and concerns utilizing a proper ray-tracing approach. In the rest of this section, we elaborate the optics and the sensor simulation components.”; [Section V. B] reciting “In Fourier optics, an optical system is modeled as a linear system with a point spread function (PSF) [30]. Thus, the response of the optic can be simulated by a convolution, or equivalently, a multiplication in the frequency domain.”) generating synthetic three-dimensional rendered images based on the dataset (See Fig. 11); PNG media_image2.png 567 965 media_image2.png Greyscale generating synthetic imagery from the plane images ([Section IV] reciting “The most important aspect in synthesizing images of a measurement system is to generate reliable images.”) incorporating optical effects into the synthetic imagery generated ([Section IV] reciting “To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor.”). rendering one or more photo-realistic scenes on the synthetic imagery generated ([Section VI: Similation Result; Fig. 11] reciting “A cylinder head in a laser triangulation inspection. (left): real camera image, (right): simulated image using the proposed sensor-realistic simulation framework.”; See Fig. 11 below); and 55 Mohammadikaji does not explicitly teach feeding a ray-tracing simulator with one or more synthetic impulse images or scenes including one or more objects (although Mohammadikaji could teach this limitation ([Section IV] reciting “To metrologically evaluate a machine vision setup, such as in terms of the measurement uncertainty [27], the provided synthetic images must be physically-based with realistic prediction of the average intensities and the image noise. Noise in real imaging systems appears due to unwanted optical effects or as a result of the electronic and photon noise. 
To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation”), Yang can teach it further), wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; estimating a point spread function from the ray tracing of the plane images by generating a dataset of a two-dimensional point spread function per each depth over the three-dimensional space; … creating one or more perceived images based on the dataset; feeding the perceived images into a perception pipeline; generating synthetic imagery from the plane images and the perceived images; providing an actual imaging sensor relative to a location on a vehicle; … and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects. 56 Yang teaches feeding a ray-tracing simulator with one or more synthetic impulse images or scenes including one or more objects ([Fig. 2] reciting “A MLP network is trained to represent the PSF for different positions and focus distances. We use ray tracing to calculate accurate PSF as the ground truth. The network takes as input the object positions (x,y,z) and focus distance fd, and produces a 2D matrix as output.”; [Section III B.; Titled “Aberration Simulator”] reciting “Our contribution lies in the accurate simulation of aberrations for the training data to improve on the classical thin lens model. The PSF characterizes the optical lens response to a point source of light. We can convolve the per-pixel PSF with the object image to simulate the image captured by a camera.”), and a location in three-dimensional space for each of the objects; estimating a point spread function from the ray tracing of the plane images by generating a dataset of a two-dimensional point spread function per each depth over the three-dimensional space ([Section IV. A] reciting “We train the PSF network for 400,000 iterations to overfit the lens. In each iteration, we randomly focus the lens to a distance of fd, and uniformly select 256 points in object space for training. The ground-truth PSFs are computed by tracing 1024 rays from each object point, and (x,y,z,fd) coordinates are provided to the network as input. We use a wavelength of 589 nm and set the PSF size to 11×11 sensor pixels.”; [Section IV. B] reciting “For comparison purposes, we also tested the low-rank PSF estimation model described in [43], [47]. In this model, we use ray tracing to calculate the PSF of the surrounding 8 positions and employ trilinear interpolation to obtain the center PSF. We divide the object space into 20 depths, with 64 grids in each depth plane. The PSFs of these positions are calculated and used for querying. As depicted in Fig. 3, the low-rank PSF model can estimate PSFs similar to the ground truth. Since the PSF is slowly varying in the object space, using enough sampled PSFs for querying can yield promising results.”); … creating one or more perceived images based on the dataset; feeding the perceived images into a perception pipeline; generating synthetic imagery from the plane images and the perceived images (See Fig. 
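For orientation, the mechanism these citations describe (a per-depth PSF estimated by ray tracing, applied to each depth plane by convolution, equivalently a multiplication in the frequency domain) can be sketched roughly as follows. This is a minimal illustration, not code from any cited reference: the Gaussian defocus kernel stands in for a ray-traced PSF, and every function name and constant is an assumption.

```python
# Minimal sketch: a per-depth PSF dataset applied by convolution.
# The Gaussian defocus kernel is a stand-in for a ray-traced PSF; all
# names and parameters here are hypothetical, not from the references.
import numpy as np
from scipy.signal import fftconvolve

def defocus_psf(depth_m, focus_m, size=11, gain=3.0):
    """Stand-in PSF: blur width grows with defocus |1/depth - 1/focus|."""
    sigma = 0.3 + gain * abs(1.0 / depth_m - 1.0 / focus_m)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()  # unit sum, so total image energy is preserved

# "Dataset of a two-dimensional PSF per each depth over the 3-D space"
depths = np.linspace(0.5, 20.0, 20)  # 20 depth planes, in meters
psf_dataset = {d: defocus_psf(d, focus_m=2.0) for d in depths}

def render_aberrated(plane_images, plane_depths):
    """Convolve each depth-plane image with its PSF and composite."""
    out = np.zeros(plane_images[0].shape, dtype=float)
    for img, d in zip(plane_images, plane_depths):
        key = depths[np.argmin(np.abs(depths - d))]  # nearest stored depth
        out += fftconvolve(img, psf_dataset[key], mode="same")
    return out
```

Normalizing each PSF to unit sum is what lets the Fourier-optics view treat the lens as a linear system, which is the point of the Mohammadikaji passage quoted above.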
58 Mohammadikaji in view of Yang does not explicitly teach … wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; … providing an actual imaging sensor relative to a location on a vehicle; … and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects.
59 Carlson teaches … wherein the synthetic impulse images or scenes include a label for each of the objects and a location in three-dimensional space for each of the objects; … and training a neural network for a vision system on a vehicle to identify the objects in the photo-realistic scenes based on the label for each of the objects and the location in three-dimensional space for each of the objects ([Section 3.6 “Generating Augmented Training Data”] reciting “We use the original image labels as the labels for the augmented data. Pixel artifacts from cameras, like chromatic aberration and blur, make the object boundaries noisy. Thus, the original target labels are used to ensure that the network makes robust and accurate predictions in the presence of camera effects”).
60 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang) to incorporate the teachings of Carlson to provide labels for the synthetic images/objects for the aberration process and PSF networks provided by the teachings of Mohammadikaji in view of Yang. Doing so would ensure that the network makes robust and accurate predictions in the presence of camera effects as stated by Carlson ([Section 3.6] recited).
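The label-reuse step Carlson describes is simple to picture: the aberrated copy of each training image inherits the original annotation unchanged, because blur and chromatic artifacts only smear object boundaries rather than move objects. A rough sketch, with `simulate_aberration` standing in for any PSF-based re-rendering step:

```python
# Sketch of Carlson-style augmentation: the aberrated copy keeps the
# ORIGINAL labels. `simulate_aberration` is a placeholder callback.
import numpy as np

def augment_with_camera_effects(dataset, simulate_aberration, rng=None):
    """Yield (aberrated_image, original_label) pairs for training."""
    rng = rng or np.random.default_rng(0)
    for image, label in dataset:
        blurred = simulate_aberration(image, rng)
        yield blurred, label  # labels are reused unchanged
```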
61 Mohammadikaji in view of Yang and Carlson does not explicitly teach providing an actual imaging sensor relative to a location on a vehicle.
62 Morrical teaches providing an actual imaging sensor relative to a location on a vehicle ([0168] reciting “Such components can be used to render images using ray tracing-based importance sampling, which can be accelerated through hardware.”; [0182] reciting “…TPU(s) may be accelerators configured to, and optimized for, performing image processing functions (e.g., for CNNs, RCNNs, etc.)… a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification and detection using data from microphones 1696; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for security and/or safety related events.”; [0201] reciting “In at least one embodiment, processor(s) 1610 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce final image for player window. In at least one embodiment, video image compositor may perform lens distortion correction on wide-view camera(s) 1670, surround camera(s) 1674, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC(s) 1604, configured to identify in cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change vehicle's destination, activate or change vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to driver when vehicle is operating in an autonomous mode and are disabled otherwise.”).
63 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang and Carlson) to incorporate the teachings of Morrical to provide a method that utilizes a type of image sensor that can be used for vehicles and that can utilize ray tracing technology similar to what was taught by Mohammadikaji in view of Yang and Carlson. Doing so would allow the sensor to be configured to identify in-cabin events and respond accordingly as stated by Morrical ([0201] recited).
64 Regarding claim 16, Mohammadikaji in view of Yang, Carlson, and Morrical teaches the method of claim 15 (see claim 15 rejection above), wherein the imaging sensor of the vehicle is located within a cabin of the vehicle (Morrical; [0201] reciting “In at least one embodiment, processor(s) 1610 may include a video image compositor that may be a processing block (e.g., implemented on a microprocessor) that implements video post-processing functions needed by a video playback application to produce final image for player window. In at least one embodiment, video image compositor may perform lens distortion correction on wide-view camera(s) 1670, surround camera(s) 1674, and/or on in-cabin monitoring camera sensor(s). In at least one embodiment, in-cabin monitoring camera sensor(s) are preferably monitored by a neural network running on another instance of SoC(s) 1604, configured to identify in cabin events and respond accordingly. In at least one embodiment, an in-cabin system may perform, without limitation, lip reading to activate cellular service and place a phone call, dictate emails, change vehicle's destination, activate or change vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to driver when vehicle is operating in an autonomous mode and are disabled otherwise.”).
65 Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang), Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (hereinafter Carlson), and Morrical et al. (US 20230066636 A1) as applied to claim 13 above, further in view of Curzan et al. (US 20230041139 A1).
66 Regarding claim 14, Mohammadikaji in view of Yang, Carlson, and Morrical teaches the non-transitory computer-readable storage medium on which is recorded instructions of claim 13 (see claim 13 rejection above), but does not explicitly teach generating the point spread function sampled at multiple locations in one or more frames.
67 Curzan teaches generating the point spread function sampled at multiple locations in one or more frames ([0081] reciting “A point spread function (PSF) blur is determined based on a PSF of a lens for an infrared detection system, at 802. The PSF blur and infrared data may be optionally up-sampled, at 804. In some embodiments, the determination of the PSF blur at 802 and the up-sampling of the PSF blur may be determined once for an infrared detection system. In some embodiments, the up-sampled PSF and infrared data pixel values may be interpolated using linear, cubic, or other method … the data may be compressed and/or otherwise processed. In some embodiments, at least a portion of the method is repeated for each frame of an infrared video. For example, 804 (if used), 806, 808 (if used) and 810 may be repeated for additional frames.”).
68 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang, Carlson, and Morrical) to incorporate the teachings of Curzan so that the point spread function taught by Mohammadikaji in view of Yang, Carlson, and Morrical is sampled at many locations across additional frames as taught by Curzan. Doing so would make the resulting imagery appear less visually pixelated to an observer as stated by Curzan ([0081] recited).
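Curzan's per-frame, multi-location sampling amounts to a spatially varying blur: the PSF is evaluated at a grid of image positions and applied piecewise. A loose sketch follows; the grid size and the `psf_at` callback are illustrative assumptions, not anything from the reference.

```python
# Rough sketch of a spatially varying PSF applied per frame. The PSF is
# sampled at a coarse grid of image locations (tile centers) and each
# tile is blurred with its locally sampled kernel.
import numpy as np
from scipy.signal import fftconvolve

def blur_frame_spatially_varying(frame, psf_at, grid=(4, 4)):
    """psf_at(cy, cx) returns the 2-D PSF sampled at location (cy, cx)."""
    h, w = frame.shape
    out = np.zeros_like(frame, dtype=float)
    ys = np.linspace(0, h, grid[0] + 1, dtype=int)
    xs = np.linspace(0, w, grid[1] + 1, dtype=int)
    for y0, y1 in zip(ys[:-1], ys[1:]):
        for x0, x1 in zip(xs[:-1], xs[1:]):
            psf = psf_at((y0 + y1) // 2, (x0 + x1) // 2)
            # convolve the full frame, keep only this tile (avoids edge seams
            # inside the tile at the cost of redundant computation)
            out[y0:y1, x0:x1] = fftconvolve(frame, psf, mode="same")[y0:y1, x0:x1]
    return out

# As in Curzan's per-frame processing, this would be repeated per video frame.
```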
69 Claim(s) 17 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang), Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (hereinafter Carlson), and Morrical et al. (US 20230066636 A1) as applied to claims 15-16 above, further in view of Lehmann, M., Wittpahl, C., Zakour, H. B., & Braun, A. (2019). Modeling realistic optical aberrations to reuse existing drive scene recordings for autonomous driving validation. Journal of Electronic Imaging, 28(1), 013005 (hereinafter Lehmann).
70 Regarding claim 17, Mohammadikaji in view of Yang, Carlson, and Morrical teaches the method of claim 16 (see claims 15-16 rejections above), but does not explicitly teach wherein the neural network is not trained from data collected on test-drives due to the synthetic imagery generated.
71 Lehmann teaches wherein the neural network is not trained from data collected on test-drives due to the synthetic imagery generated ([Abstract] reciting “The numerical basis for this model is a nonlinear regression of the PSF with an artificial neural network. The novelty lies in the portability and the parameterization of this model…With this model, it is possible to reuse existing recordings, with the potential to avoid millions of test drive miles.”; [Section 1] reciting “…and then convolving with a different set of optical properties would dramatically reduce the number of necessary test drives, while at the same time increasing testing depth and function robustness.”; [Section 2.2] reciting “For this work, we selected just a single lens, meaning that the training of the neural network is based solely on the measurement data of one lens specimen”; [See also Fig. 10]).
72 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang, Carlson, and Morrical) to incorporate the teachings of Lehmann to provide a confirmation that test drives are not needed when using the image sensors and neural networks taught by Mohammadikaji in view of Yang, Carlson, and Morrical. Doing so would provide a valid method to validate the functional and safety limits of camera-based advanced driver assistance systems as stated by Lehmann ([Abstract] recited).
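Lehmann's reuse argument reduces to re-rendering already-recorded drive footage through a different lens model rather than collecting new data. Sketched under the assumption that the regressed PSF is available as a plain array (the function name is hypothetical):

```python
# Sketch of Lehmann's reuse idea: re-render an existing drive recording
# through a different lens model instead of driving new test miles.
# `target_lens_psf` stands in for the neural-network PSF regression output.
from scipy.signal import fftconvolve

def rerender_recording(frames, target_lens_psf):
    """Convolve each recorded frame with the target lens PSF."""
    return [fftconvolve(f, target_lens_psf, mode="same") for f in frames]
```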
73 Regarding claim 19, Mohammadikaji in view of Yang, Carlson, Morrical, and Lehmann teaches the method of claim 17 (see claims 15-17 rejections above), further comprising: implementing a blurring effect; and implementing a noise tradeoff (Mohammadikaji; [Section V. A; 1b)] reciting “A familiar aberration is defocus, in which the center of the spherical wave at the exit pupil is either in front or behind the image plane, yielding a blurred image.”; [Section IV] reciting “Noise in real imaging systems appears due to unwanted optical effects or as a result of the electronic and photon noise. To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor. Although sensor simulation is not a totally new concept in computer graphics, this paper proposes using an industry standard model for simulating the sensors and their corresponding intensity noise, which can cover a wide range of imaging sensors in the market.”).
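The blur-plus-noise pairing in this mapping follows the standard sensor pipeline: optical blur first, then photon (Poisson) and read (Gaussian) noise. A toy version, with all constants invented purely for illustration:

```python
# Illustrative sensor model: defocus blur followed by photon (Poisson)
# and read (Gaussian) noise. Constants are made up for the sketch.
import numpy as np
from scipy.signal import fftconvolve

def sense(scene, psf, full_well=10_000, read_noise_e=2.0, rng=None):
    rng = rng or np.random.default_rng(0)
    optical = fftconvolve(scene, psf, mode="same")        # blurring effect
    electrons = np.clip(optical, 0, 1) * full_well        # scale to electrons
    shot = rng.poisson(electrons).astype(float)           # photon noise
    noisy = shot + rng.normal(0.0, read_noise_e, scene.shape)  # read noise
    return np.clip(noisy / full_well, 0, 1)
```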
74 Claim(s) 22-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang) and Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (hereinafter Carlson) as applied to claims 1 and 20-21 above, further in view of Kawamata et al. (US 20040175053 A1).
75 Regarding claim 22, Mohammadikaji in view of Yang and Carlson teaches the method of claim 21 (see claims 1 and 20-21 rejections above), but does not explicitly teach wherein filtering the groups of pixels is based on a location of an imaging sensor of the vision system relative to a vehicle.
76 Kawamata teaches wherein filtering the groups of pixels is based on a location of an imaging sensor of the vision system relative to a vehicle ([0012] reciting “Furthermore, in the foregoing image pickup apparatus, a pixel brightness value of the picked-up image of a range of at least a predetermined distance from the image pickup portion may become relatively greater than a pixel brightness value of the picked-up image of a range of less than the predetermined distance from the image pickup portion, due to the spatial filter process performed by the image processing portion.”; [0051] reciting “In an image pickup system 50, the image pickup apparatus 1 is used as a means for taking pictures outside the vehicle. The image pickup apparatus 1 is disposed within the vehicle's passenger compartment to take images of the view outside the vehicle via the glass 51. The image pickup apparatus 1 employed in the system is, for example, an apparatus having sensitivity to near-infrared radiation. By disposing a visible light cutoff filter in the picture-taking optical system of the image pickup portion 2 of the image pickup apparatus 1, picture acquisition based mainly on a near-infrared component becomes possible.”).
77 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang and Carlson) to incorporate the teachings of Kawamata so that the pixel filtering taught by Mohammadikaji in view of Yang and Carlson is based on a specific imaging-sensor location in a vehicle. Doing so would assist a vehicle's driver in visual recognition during a nighttime drive as stated by Kawamata et al. ([0050] recited).
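Kawamata's spatial filter is optical, but its quoted effect, far-range pixels receiving relatively more brightness than near-range ones, can be mimicked functionally. The following is a loose analogue only, not Kawamata's actual mechanism, and it assumes a per-pixel depth map is available:

```python
# Loose functional analogue of a distance-dependent spatial filter:
# pixels imaging content beyond a threshold distance get relatively
# more gain. Depth-map input and gain values are assumptions.
import numpy as np

def distance_weighted_filter(image, depth_map, threshold_m=20.0,
                             far_gain=1.5, near_gain=1.0):
    """Boost far-range pixel brightness relative to near-range pixels."""
    gain = np.where(depth_map >= threshold_m, far_gain, near_gain)
    return np.clip(image * gain, 0.0, 1.0)
```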
78 Regarding claim 23, Mohammadikaji in view of Yang, Carlson, and Kawamata teaches the method of claim 22 (see claims 1 and 20-22 rejections above), wherein the imaging sensor is located within a passenger compartment of the vehicle (Kawamata; [0051] reciting “In an image pickup system 50, the image pickup apparatus 1 is used as a means for taking pictures outside the vehicle. The image pickup apparatus 1 is disposed within the vehicle's passenger compartment to take images of the view outside the vehicle via the glass 51. The image pickup apparatus 1 employed in the system is, for example, an apparatus having sensitivity to near-infrared radiation. By disposing a visible light cutoff filter in the picture-taking optical system of the image pickup portion 2 of the image pickup apparatus 1, picture acquisition based mainly on a near-infrared component becomes possible.”).
79 Regarding claim 24, Mohammadikaji in view of Yang, Carlson, and Kawamata teaches the method of claim 22 (see claims 1 and 20-22 rejections above), wherein generating synthetic three-dimensional rendered images (Mohammadikaji; see Fig. 11) includes creating one or more perceived images (Yang; see Fig. 1).
80 Claim(s) 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over M. Mohammadikaji, S. Bergmann, J. Beyerer, J. Burke and C. Dachsbacher, "Sensor-Realistic Simulations for Evaluation and Planning of Optical Measurement Systems With an Application to Laser Triangulation," in IEEE Sensors Journal, vol. 20, no. 10, pp. 5336-5349, 15 May 2020, doi: 10.1109/JSEN.2020.2971683 (hereinafter Mohammadikaji) in view of Yang, X., Fu, Q., Elhoseiny, M., & Heidrich, W. (2023). Aberration-aware depth-from-focus. IEEE Transactions on Pattern Analysis and Machine Intelligence (hereinafter Yang), Carlson, A., Skinner, K. A., Vasudevan, R., & Johnson-Roberson, M. (2018). Modeling camera effects to improve visual learning from synthetic data. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops (hereinafter Carlson), and Kawamata et al. (US 20040175053 A1) as applied to claims 1 and 20-22 above, further in view of Jia et al. (US 20220198611 A1).
81 Regarding claim 25, Mohammadikaji in view of Yang, Carlson, and Kawamata teaches the method of claim 22 (see claims 1 and 20-22 rejections above), further comprising: implementing a blurring effect including at least one aberration (Mohammadikaji; [Section V. A; 1b)] reciting “Aberrations are generally either chromatic or monochromatic [29]. Chromatic aberrations are caused by the varying refractive index of a lens for different wavelengths. Monochromatic aberrations can be caused by the lens geometry, lens misconfiguration, or they can be due to the oversimplifying assumptions typically made for optical derivations. A typical example of the latter is the paraxial approximation, in which one assumes that rays entering the optical system hold a very small angle to the optical axis. Rays not satisfying this assumption cause aberrations. Typically, choosing a wider aperture strengthens the aberration effects, mainly because more non-paraxial rays enter the optical system. A familiar aberration is defocus, in which the center of the spherical wave at the exit pupil is either in front or behind the image plane, yielding a blurred image.”); and implementing a noise tradeoff (Mohammadikaji; [Section IV] reciting “Noise in real imaging systems appears due to unwanted optical effects or as a result of the electronic and photon noise. To physically simulate the image formation process in a machine vision system, at least two more components need to be physically simulated in addition to the light transport simulation: the imaging optics, including relevant optical effects such as diffraction and aberrations, and the light sensitive sensor. Although sensor simulation is not a totally new concept in computer graphics, this paper proposes using an industry standard model for simulating the sensors and their corresponding intensity noise, which can cover a wide range of imaging sensors in the market.”).
82 Mohammadikaji in view of Yang, Carlson, and Kawamata does not explicitly teach implementing a noise tradeoff including increasing an information rate, a signal-to-noise ratio and an allocated bandwidth.
83 Jia teaches including increasing an information rate, a signal-to-noise ratio and an allocated bandwidth ([Abstract] reciting “In a method of filtering an image from data received from a CMOS camera, image data is loaded by a computational device from the camera.”; [0024] reciting “This is even more relevant in sCMOS sensors, where the increased signal capacity and much lower readout noise comes at the expenses of the fixed pattern noise due to pixel gain fluctuations.”; [0049] reciting “If we call H(f) the modulation transfer function (MTF) of the system, we have that: … where N₀ is a constant value that represents the noise power per unit bandwidth.”; [0071] reciting “We validated the performance of ACsN under various sampling rates normally adopted for fluorescence microscopy. In practice, a sampling rate close to the Nyquist criterion represents a good tradeoff between signal to noise ratio (SNR) and detail preservation. Here, examining numerically and experimentally across a wide range of sampling rates, we demonstrated the viability of ACsN for low SNR with over-sampling and no noticeable loss of signals with under-sampling.”).
84 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Mohammadikaji in view of Yang, Carlson, and Kawamata) to incorporate the teachings of Jia to provide additional elements like bandwidth, signal-to-noise ratio, and a type of information rate for the noise tradeoff teachings of Mohammadikaji in view of Yang, Carlson, and Kawamata. Doing so would allow detail preservation as stated by Jia ([0071] recited).
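The tradeoff Jia's quoted passage points at follows from noise power scaling with allocated bandwidth (N₀ per unit bandwidth): widening the band raises the achievable information rate but lowers SNR. A back-of-envelope check, with every number invented for illustration:

```python
# Back-of-envelope version of the bandwidth/SNR tradeoff: total noise
# power = N0 * bandwidth, so a wider band carries more information at
# a lower SNR. All numbers below are illustrative only.
import numpy as np

def snr_db(signal_power, n0, bandwidth_hz):
    """SNR in dB when noise power grows linearly with bandwidth."""
    return 10.0 * np.log10(signal_power / (n0 * bandwidth_hz))

for bw in (1e6, 4e6, 16e6):  # wider band => more detail, lower SNR
    print(f"{bw:.0e} Hz -> {snr_db(1.0, 1e-9, bw):.1f} dB")
```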
Conclusion
85 Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
86 Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE whose telephone number is (571)272-5680. The examiner can normally be reached Mon-Thu: 7:30am-5pm; First Fridays Off; Second Fridays: 7:30am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHNNY T LE/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

Jan 26, 2024
Application Filed
Sep 04, 2025
Non-Final Rejection — §103, §112
Dec 01, 2025
Applicant Interview (Telephonic)
Dec 01, 2025
Examiner Interview Summary
Dec 08, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103, §112 (current)


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
0%
With Interview (-66.7%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
