Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements filed 8/4/2023 and 1/29/2024 have been considered by the examiner.

Drawings

The drawings filed 8/4/2023 are approved by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 10 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The neural network set forth in claims 10 and 20 does not have antecedent basis in parent claims 1 and 11. For purposes of examination, it is assumed that the neural network is part of the learning algorithm.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sano et al (United States Patent Application Publication No. 2018/0054581) in view of Kim et al (United States Patent Application Publication No. 2018/0054581).

With respect to claim 1, Sano et al disclose: A configuration control circuitry for a time-of-flight system, the time-of-flight system comprising an illumination unit configured to emit light to a scene [taught by the infrared light emitting unit in figure 2] and an imaging unit configured to generate image data representing a time-of-flight measurement of light reflected from the scene [taught by the solid state imaging apparatus (20) in figure 2; figure 6; and paragraphs [0086] to [0091]], the configuration control circuitry being configured to: obtain the image data from the imaging unit and depth data representing a depth map of the scene, wherein the depth data is generated based on the image data [the solid state imaging apparatus (20) provides both depth (paragraphs [0086] to [0091]) and image (figures 7 and 8) data]; determine a set of configuration parameters for at least one of the illumination unit and the imaging unit, wherein the set of configuration parameters is determined with a learning algorithm, wherein the learning algorithm is based on a first sub-module and a second sub-module, wherein the
first sub-module is configured to estimate, based on the obtained image data and the obtained depth data, a measurement indicator of the depth map, wherein the second sub-module is configured to estimate, based on the estimated measurement indicator, the set of configuration parameters for improving a subsequent time-of-flight measurement.

Sano et al do not teach configuration control circuitry arranged to determine a set of configuration parameters for at least one of the illumination unit and the imaging unit, wherein the set of configuration parameters is determined with a learning algorithm, wherein the learning algorithm is based on a first sub-module and a second sub-module, wherein the first sub-module is configured to estimate, based on the obtained image data and the obtained depth data, a measurement indicator of the depth map, wherein the second sub-module is configured to estimate, based on the estimated measurement indicator, the set of configuration parameters for improving a subsequent time-of-flight measurement.
With respect to this difference, Kim et al disclose: a processor using machine learning configured to determine a set of configuration parameters for at least one of the illumination unit and the imaging unit [the abstract teaches determining optimal values for parameters used in operation of an image processor], wherein the set of configuration parameters is determined with a learning algorithm [the device uses machine learning], wherein the learning algorithm is based on a first sub-module and a second sub-module [note, a first and second module reads on individual process steps performed by a processor], wherein the first sub-module is configured to estimate, based on the obtained image data and the obtained depth data, a measurement indicator of the depth map [the abstract states, “… inputting initial values for the plurality of parameters to a machine learning model having an input layer, corresponding to the plurality of parameters, and an output layer corresponding to a plurality of evaluation items extracted from a result image generated by the image signal processor…”], wherein the second sub-module is configured to estimate, based on the estimated measurement indicator, the set of configuration parameters for improving a subsequent time-of-flight measurement [the abstract states, “… obtaining evaluation scores for the plurality of evaluation items using an output of the machine learning model; adjusting weights, applied to the plurality of parameters, based on the evaluation scores; and determining the optimal values using the adjusted weights…”].

From the above, Kim et al teach that it was known before the effective filing date of the present application to use machine learning to extract image data and then to evaluate that data to determine operational parameters of an imaging device.
Therefore, it would have been obvious to a person of ordinary skill in the art, with a reasonable expectation of success, to use machine learning, as taught by Kim et al, to configure the parameters of the combined image and depth system of Sano et al when seeking to optimize device performance.

Claim 11 is rejected by the combination of Sano et al and Kim et al, as applied to claim 1.

With regard to claims 2 and 12, paragraph [0047] of Kim et al states, “…The image signal processor 121 may adjust a plurality of parameters associated with the raw data and signal-process the raw data according to the adjusted parameters to generate a result image. The parameters may include two or more of color, blurring, sharpness, noise, a contrast ratio, resolution, and a size…”. Therefore, claims 2 and 12 are met by the combination of Sano et al and Kim et al, as applied to claims 1 and 11.

With regard to claims 3 and 13, output power, illumination pattern and wavelength are all operational parameters that would have affected the image and depth values produced by the device of Sano et al. Therefore, claims 3 and 13 would have been obvious over the combination of Sano et al and Kim et al because Kim et al taught using machine learning to determine operational parameters of imaging devices.

With regard to claims 4 and 14, figure 6 of Sano et al teaches using a range signal with a modulation frequency and duty cycle. Therefore, claims 4 and 14 are met by the combination of Sano et al and Kim et al, as applied to claims 1 and 11, because figure 6 shows these operational parameters.

With regard to claims 5 and 15, collecting the charge (701 and 702) in figure 6 of Sano et al would have required integration, thus rendering these claims obvious to a person of ordinary skill in the art processing charge for an array of detectors.
The device of Sano et al measures Z-axis range across a 2D array, thus meeting claims 7 and 17. Therefore, claims 7 and 17 are met by the combination of Sano et al and Kim et al, as applied to claims 1 and 11.

The abstract of Kim et al teaches using initial values for the plurality of parameters for the machine learning model, thus teaching estimating a set of configuration parameters further based on a set of predetermined configuration parameters of at least one of the illumination unit and the imaging unit, when the teaching of Kim et al is applied to the solid state imaging device of Sano et al. Therefore, claims 8, 9, 18 and 19 are met by the combination of Sano et al and Kim et al, as applied to claims 1 and 11.

A neural network is an inherent part of machine learning. Therefore, claims 10 and 20 are met by the combination of Sano et al and Kim et al, as applied to claims 1 and 11, because the teaching of Kim et al applied to Sano et al would have operated on real time-of-flight data.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sano et al (United States Patent Application Publication No. 2018/0054581) in view of Kim et al (United States Patent Application Publication No. 2018/0054581), as applied to claims 1 and 11 above, and further in view of Na et al (United States Patent Application Publication No. 2022/0050206). Paragraph [0031] of Na et al teaches that it was known before the effective filing date of the present application that indirect and direct time-of-flight detection were interchangeable. Therefore, it would have been obvious for a person of ordinary skill in the art, with a reasonable expectation of success, to modify the combination of Sano et al and Kim et al to use direct time-of-flight detection because Na et al taught that this type of detection produces the same result.
Any inquiry concerning this communication should be directed to MARK HELLNER at telephone number (571)272-6981. Examiner interviews are available via a variety of formats. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

/MARK HELLNER/
Primary Examiner, Art Unit 3645