Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

The following non-final action is in response to application 18/543,876, filed on 12/18/2023. The communication is the first action on the merits.

Status of Claims

Claims 1-14 are currently pending and have been rejected as follows.

Drawings

The drawings filed on 12/18/2023 are accepted.

Foreign Priority

Applicant's claim to foreign priority has been acknowledged and the corresponding documents have been received.

Information Disclosure Statement

The IDS has been received, and the documents within it have been considered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. A subject matter eligibility analysis is set forth below. See MPEP 2106.
Claim 1 recites: A method for processing measurement data which are present as a point cloud of points in space, wherein the point cloud assigns values of one or more measured variables to each point, with regard to a predetermined task, the method comprising the following steps: for each respective measured variable of the one or more measured variables, collecting and processing all values of the respective measured variable that are assigned to points of the point cloud to form an aggregated representation, wherein the representation has the same dimensionality irrespective of how many points of the point cloud are assigned values of the respective measured variable; feeding one or more of the representations as inputs to a task network; and mapping, by the task network, the one or more representations to a required output with regard to the predetermined task.

The bolded language in the claim limitations indicates abstract ideas, and the remaining limitations are considered to be additional elements. Under Step 1 of the analysis, claim 1 belongs to a statutory category, namely it is a process claim. Claims 13-14 are machine claims.

Under Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Under Step 2A, Prong One, under the broadest reasonable interpretation consistent with the specification, the limitations recited in Claim 1 recite at least one judicial exception: a mental process (observations, evaluations, judgments, or opinions) and a mathematical concept (mathematical calculations, relationships, formulas, or equations).
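For clarity of the record, the claimed aggregation step can be illustrated with a short sketch. This is an illustration only, not the applicant's implementation; the function name, bin count, and value range are assumed. A histogram over a measured variable yields a representation whose dimensionality is fixed by the number of bins, irrespective of how many points carry values of that variable:

```python
import numpy as np

def aggregate(values, num_bins=8, value_range=(0.0, 1.0)):
    """Collect all values of one measured variable and form an
    aggregated representation (here: a normalized histogram) whose
    dimensionality equals num_bins, irrespective of how many points
    of the point cloud carry values of this variable."""
    hist, _ = np.histogram(values, bins=num_bins, range=value_range)
    return hist / max(len(values), 1)

rng = np.random.default_rng(0)
small_cloud = rng.random(13)        # 13 points carry this variable
large_cloud = rng.random(10_000)    # 10,000 points carry it
rep_small = aggregate(small_cloud)  # shape (8,)
rep_large = aggregate(large_cloud)  # shape (8,) as well
```

Both representations have the same shape, so a downstream task network with a fixed input size can consume either one.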
According to the specification, “the point cloud assigning values of one or more measured variables to each point” involves a histogram that assigns, to value ranges of the measured variable 2a-2c, the number of points 1a-1d with values 2a#-2c# of the measured variable 2a-2c in these value ranges [Fig. 1, 0037]. This claim limitation involves mathematical relationships, given that a histogram performs the assigning process by matching points with values in specific value ranges of the measured variables. The assigning process is also based in spatial interpolation, coordinate transformations, statistics, linear algebra, etc., and thus involves mathematical calculations.

According to the specification, “for each respective measured variable of the one or more measured variables … processing all values of the respective measured variable that are assigned to points of the point cloud to form an aggregated representation” involves step 120, where values 2a#-2c# are processed into an aggregated representation 3a-3c of the relevant measured variable 2a-2c [0036, Fig. 1]. The assigning process is also based in spatial interpolation, coordinate transformations, statistics, and linear algebra, and thus involves mathematical calculations and relationships. Furthermore, step 120 in Fig. 1 involves processing via a histogram and an adjustment of the parameterized distribution function, which involve mathematical calculations and relationships.

According to the specification, “mapping, by the task network, the one or more representations to a required output with regard to the predetermined task” involves processing test point clouds and/or validation point clouds and using deviation as feedback in step 140. In addition, according to block 141, when optimizing a hyperparameter, test point clouds 1′ and/or validation point clouds 1″ can be processed into outputs 5 for each value of the hyperparameter.
According to block 142, a deviation of the outputs 5 obtained in this way from target outputs 5′, with which the test point clouds 1′ or validation point clouds 1″ are labeled, can then be used as feedback for optimizing the hyperparameter [Fig. 1, 0044]. Computing deviations of outputs involves a mathematical calculation. Claims 13-14 recite abstract ideas similar to those rejected in the claim 1 analysis.

Step 2A, Prong Two of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception(s) into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. 2019 PEG Section III(A)(2), 84 Fed. Reg. at 54-55.

The additional elements in the preambles of all independent claims are recited at a high level of generality and represent insignificant extra-solution activity (field-of-use limitations) that is not meaningful to indicate a practical application. Claim 1 recites the additional elements: “collecting … all values of the respective measured variable” and “feeding one or more of the representations as inputs to a task network”. Claims 13-14 recite similar additional elements. These claim limitations generically recite collecting/outputting measurement data by sensors/devices (all independent claims), which represents the insignificant extra-solution activity of mere data gathering/outputting results. According to the October 2019 Update to the 2019 Subject Matter Eligibility Guidance, such a step is “performed in order to gather data for the mental analysis step, and is a necessary precursor for all uses of the recited exception. It is thus extra-solution activity, and does not integrate the judicial exception into a practical application”.
Claims 13-14 also recite the additional elements: “a machine-readable data carrier”, “a computer program including machine readable instructions”, and “one or more computers and/or compute instances”. These additional elements are computer components recited at a high level of generality and are not meaningful; therefore, they do not qualify as particular machines indicating a practical application.

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. When re-evaluated under Step 2B, the claim limitations are found to be well-understood, routine, and conventional as explained by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a communication network) and as evidenced by Zhou and Caspers. Therefore, the combination and arrangement of the above-identified additional elements, when analyzed under Step 2B, also fails to necessitate a conclusion that claims 1 and 13-14 amount to significantly more than the abstract idea.

With regard to dependent claims 2-12, they provide additional features/steps which are part of an expanded abstract idea of the independent claims (additionally comprising abstract-idea steps) and, therefore, these claims are not eligible without meaningful additional elements that reflect a practical application and/or additional elements that qualify as significantly more, for substantially similar reasons as discussed with regard to claim 1.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 4-14 are rejected under 35 U.S.C.
102 as being anticipated by Zhou (US 20220058858 A1).

Regarding claim 1, Zhou teaches a method for processing measurement data which are present as a point cloud of points in space [Zhou: Abstract], wherein the point cloud assigns values of one or more measured variables to each point, with regard to a predetermined task, the method comprising the following steps: (… a system … that processes point cloud data representing a sensor measurement of a scene captured by one or more sensors to generate an object detection output that identifies locations of one or more objects in the scene [0012]. The sensor data 122 includes point cloud data that characterizes the latest state of an environment (i.e., an environment at the current time point) … a point cloud can define the shape of some real or synthetic physical system, where each point in the point cloud is defined by three values representing respective coordinates in the coordinate system, e.g., (x, y, z) coordinates … each point in the point cloud can be defined by more than three values, wherein three values represent coordinates in the coordinate system and the additional values each represent a property of the point of the point cloud, e.g., an intensity of the point in the point cloud [0016]; This allows for the neural network that processes the representation to generate task outputs, e.g., object detection or object classification outputs … Additionally, this specification describes a multi-view fusion architecture that can encode point features with more discriminative context information extracted from the different views, e.g., a birds-eye view and a perspective view, resulting in more accurate predictions being generated by the task neural network [0005]); for each respective measured variable of the one or more measured variables, collecting and processing all values of the respective measured variable that are assigned to points of the point cloud to form an aggregated representation, (The
system generates, for each of one or more views of the scene, a corresponding dynamic voxel representation that assigns each of the three-dimensional points in the point cloud data to a respective voxel of a variable number of voxels (step 204) … the dynamic voxel representation does not have a fixed number of voxels or a fixed number of points per voxel. Instead, the dynamic voxel representation has a variable number of voxels, i.e., has different numbers of voxels for different sets of three-dimensional points, and a variable number of points per voxel. Moreover, the dynamic voxel representation defines a bi-directional mapping between voxels and points in the set of three-dimensional points, i.e., all of the points in the point cloud data are included in one of the voxels and no points are discarded [0025]); wherein the representation has the same dimensionality irrespective of how many points of the point cloud are assigned values of the respective measured variable; (Because the dimensions K and T are fixed, the resulting representation is always the same size [0043, Fig. 3: Voxel Partition]); feeding one or more of the representations as inputs to a task network; (The system then generates a network input from the dynamic voxel representations corresponding to each of the one or more views (step 206) [0028]); and mapping, by the task network, the one or more representations to a required output with regard to the predetermined task (The system can then process the voxel-level feature map for the view using a convolutional neural network (“convolution tower”) to generate the voxel feature representations (“context features”) for each of the voxels in the view [0060]; The system can train each of the neural network components described with reference to FIG. 4 jointly with the task neural network on ground truth object detection outputs for point clouds in a set of training data.
For example, when the task is object detection, the loss function used for the training of these neural networks can be an object detection loss that measures the quality of object detection outputs generated by these neural networks relative to the ground truth object detection outputs, e.g., smoothed losses for regressed values and cross entropy losses for classification outputs. Particular examples of loss functions that can be used for the training … [0068]).

Regarding claim 4, Zhou teaches wherein the number K of intervals and/or at least one characteristic value of an architecture of the task network is optimized as a hyperparameter, (… by referencing the point to voxel mapping, the system aggregates voxel-level information from the points within each voxel for the view by applying pooling, e.g., max pooling (“maxpool”) or average pooling, to generate a voxel-wise feature map for the view. By performing this aggregation, the system can effectively generate the voxel feature representations even when different voxels have different numbers of points [0057]; The system can train each of the neural network components described with reference to FIG. 4 jointly with the task neural network on ground truth object detection outputs for point clouds in a set of training data. For example, when the task is object detection, the loss function used for the training of these neural networks can be an object detection loss that measures the quality of object detection outputs generated by these neural networks relative to the ground truth object detection outputs, e.g., smoothed losses for regressed values and cross entropy losses for classification outputs. Particular examples of loss functions that can be used for the training … [0068]).
and wherein for each value of the hyperparameter, test point clouds and/or validation point clouds are processed into outputs, (… within each view, the system separately generates view-dependent features for each point and then aggregates the view-dependent features to generate voxel-level features for each voxel in the representation for the view [0058]; As a particular example, the voxel-wise feature map can include a respective spatial location corresponding to each of the partitions of the scene in the corresponding view. For each partition that corresponds to a voxel, i.e., each partition to which at least one point was assigned during voxelization, the features at the spatial location corresponding to the partition are the voxel-level features for the corresponding voxel [0059]; The system can then process the voxel-level feature map for the view using a convolutional neural network (“convolution tower”) to generate the voxel feature representations (“context features”) for each of the voxels in the view [0060]; Finally, with each view and using the point-to-voxel mapping, the system gathers voxel features per point (“gather voxel feature per point”). In other words, for each point, the system associates the voxel feature representation for the voxel to which the point belongs with the point [0061]; By performing these operations for each view, the system generates, for each point, respective voxel feature representations for each of the views [0062]); and a deviation of the outputs obtained by the processing from target outputs, with which the test point clouds and/or validation point clouds are labeled (Fig. 4), is used as feedback for the optimization of the hyperparameter (The embedding neural network can be, for example, a fully-connected (FC) neural network.
As a particular example, the embedding neural network can be composed of a linear layer, a batch normalization (BN) layer and a rectified linear unit (ReLU) layer [0053]; This allows for the neural network that processes the representation to generate task outputs, e.g., object detection or object classification outputs … Additionally, this specification describes a multi-view fusion architecture that can encode point features with more discriminative context information extracted from the different views, e.g., a birds-eye view and a perspective view, resulting in more accurate predictions being generated by the task neural network [0005]; The system can then generate the network input by combining the combined feature representations of the three-dimensional points. For example, the system can scatter or otherwise generate a pseudo-image (an h×w×d feature map) from the combined feature representations of the three-dimensional points [0064]; This network input, i.e., the pseudo-image, can then be processed by a task neural network, e.g., a conventional two-dimensional convolutional neural network that has been configured to perform the desired task, to generate a network output for the desired task.
In some implementations, the system transforms the combined feature representation to a lower feature dimension, e.g., using a learned projection matrix, as part of generating the network input to reduce computational cost [0065]; For example, when the task is object detection, the task neural network can include a two-dimensional convolutional backbone neural network and a 3d object detection neural network head that is configured to process the output of the backbone neural network to generate an object detection output that identifies locations of objects in the point cloud, e.g., that identifies locations of bounding boxes in the image and a likelihood that each bounding box includes an object [0066]; However, the task neural network can generally have any appropriate architecture that maps a network input to an output for the desired task [0067]; The system can train each of the neural network components described with reference to FIG. 4 jointly with the task neural network on ground truth object detection outputs for point clouds in a set of training data. For example, when the task is object detection, the loss function used for the training of these neural networks can be an object detection loss that measures the quality of object detection outputs generated by these neural networks relative to the ground truth object detection outputs, e.g., smoothed losses for regressed values and cross entropy losses for classification outputs. Particular examples of loss functions that can be used for the training … [0068]).

Regarding claim 5, Zhou teaches wherein the aggregated representation includes one or more static characteristic values of a set of the collected values of the respective measured variable (Fig. 3).
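The hyperparameter-optimization loop addressed in the claim 4 analysis can be sketched as follows. This is an examiner's illustration only, not code from Zhou or the application; the stand-in task network, the candidate values, and the squared-error deviation measure are all assumptions made to keep the sketch self-contained:

```python
import numpy as np

def representation(values, k):
    # Aggregated representation with k intervals (the hyperparameter K).
    hist, _ = np.histogram(values, bins=k, range=(0.0, 1.0))
    return hist / max(len(values), 1)

def task_network(rep):
    # Stand-in for a trained task network, kept trivial (the mean of
    # the representation) so the sketch stays self-contained.
    return float(rep.mean())

def select_k(validation_clouds, targets, candidates=(4, 8, 16)):
    # For each candidate value of K, process the validation point
    # clouds into outputs, then use the deviation of those outputs
    # from the labeled target outputs as feedback for the optimization.
    def deviation(k):
        outputs = [task_network(representation(v, k)) for v in validation_clouds]
        return sum((o - t) ** 2 for o, t in zip(outputs, targets))
    return min(candidates, key=deviation)

rng = np.random.default_rng(0)
validation_clouds = [rng.random(n) for n in (50, 200, 1000)]
targets = [0.2, 0.3, 0.1]           # labels of the validation clouds
best_k = select_k(validation_clouds, targets)
```

The candidate with the smallest total deviation on the labeled validation clouds is kept; any other scalar deviation measure could serve as the feedback signal.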
Regarding claim 6, Zhou teaches wherein a parameterized distribution function is adjusted to the collected values of the respective measured variable by varying parameters, and those values of the parameters for which the adjustment is optimal are included in the aggregated representation (The embedding neural network can be, for example, a fully-connected (FC) neural network. As a particular example, the embedding neural network can be composed of a linear layer, a batch normalization (BN) layer and a rectified linear unit (ReLU) layer [0053]; The system then generates, for each of the views, a corresponding dynamic voxel representation that assigns, to each voxel of a set of voxels for the view, a variable number of three-dimensional points as described above. As a result, the system also establishes, for each view, a bi-directional mapping between voxels in the dynamic voxel representation and the three-dimensional points in the point cloud data. The established point/voxel mappings are (F.sup.cart.sub.V(p.sub.i), F.sup.cart.sub.P(v.sub.j)) and (F.sub.spheV(p.sub.i), F.sub.spheP(v.sub.j)) for the birds-eye view and the perspective view, respectively [0054, Fig. 4]; Within each view and for each voxel in the dynamic voxel representation corresponding to the view, the system processes the feature representations of the three-dimensional points assigned to the voxel to generate respective voxel feature representations of each of the three-dimensional points assigned to the voxel. In other words, the system generates a respective voxel feature representation for each voxel and then associates the voxel feature representation with each point assigned to the voxel using the established mapping [0055, 0057-0062]; The system can train each of the neural network components described with reference to FIG. 4 jointly with the task neural network on ground truth object detection outputs for point clouds in a set of training data.
For example, when the task is object detection, the loss function used for the training of these neural networks can be an object detection loss that measures the quality of object detection outputs generated by these neural networks relative to the ground truth object detection outputs, e.g., smoothed losses for regressed values and cross entropy losses for classification outputs. Particular examples of loss functions that can be used for the training … [0068]).

Regarding claim 7, Zhou teaches wherein the task network is a classifier network mapping its input to classification scores with respect to one or more classes of a predetermined classification (This allows for the neural network that processes the representation to generate task outputs, e.g., object detection or object classification outputs … Additionally, this specification describes a multi-view fusion architecture that can encode point features with more discriminative context information extracted from the different views, e.g., a birds-eye view and a perspective view, resulting in more accurate predictions being generated by the task neural network [0005]; The system can then generate the network input by combining the combined feature representations of the three-dimensional points. For example, the system can scatter or otherwise generate a pseudo-image (an h×w×d feature map) from the combined feature representations of the three-dimensional points [0064]; This network input, i.e., the pseudo-image, can then be processed by a task neural network, e.g., a conventional two-dimensional convolutional neural network that has been configured to perform the desired task, to generate a network output for the desired task.
In some implementations, the system transforms the combined feature representation to a lower feature dimension, e.g., using a learned projection matrix, as part of generating the network input to reduce computational cost [0065]; For example, when the task is object detection, the task neural network can include a two-dimensional convolutional backbone neural network and a 3d object detection neural network head that is configured to process the output of the backbone neural network to generate an object detection output that identifies locations of objects in the point cloud, e.g., that identifies locations of bounding boxes in the image and a likelihood that each bounding box includes an object [0066]; However, the task neural network can generally have any appropriate architecture that maps a network input to an output for the desired task [0067]; The system can train each of the neural network components described with reference to FIG. 4 jointly with the task neural network on ground truth object detection outputs for point clouds in a set of training data. For example, when the task is object detection, the loss function used for the training of these neural networks can be an object detection loss that measures the quality of object detection outputs generated by these neural networks relative to the ground truth object detection outputs, e.g., smoothed losses for regressed values and cross entropy losses for classification outputs. Particular examples of loss functions that can be used for the training … [0068]).

Regarding claim 8, Zhou teaches wherein the task network is a multilayer perceptron with fully cross-linked layers (The embedding neural network can be, for example, a fully-connected (FC) neural network. As a particular example, the embedding neural network can be composed of a linear layer, a batch normalization (BN) layer and a rectified linear unit (ReLU) layer [0053]).
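The FC/BN/ReLU composition cited from Zhou [0053] for the claim 8 mapping can be sketched in a few lines. This is an illustrative sketch under assumed layer sizes, not Zhou's actual network; the simplified batch normalization uses batch statistics only:

```python
import numpy as np

def linear(x, w, b):
    return x @ w + b

def batch_norm(x, eps=1e-5):
    # Simplified, batch-statistics-only normalization.
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def relu(x):
    return np.maximum(x, 0.0)

def mlp(x, params):
    # Multilayer perceptron with fully connected layers, each
    # composed of linear -> batch norm -> ReLU, echoing the
    # FC/BN/ReLU composition Zhou describes in [0053].
    for w, b in params:
        x = relu(batch_norm(linear(x, w, b)))
    return x

rng = np.random.default_rng(0)
params = [(rng.standard_normal((8, 16)), np.zeros(16)),
          (rng.standard_normal((16, 4)), np.zeros(4))]
batch = rng.standard_normal((32, 8))  # 32 aggregated representations
scores = mlp(batch, params)           # shape (32, 4)
```

Each input row is an aggregated representation of fixed dimensionality; the output row can be read as per-class scores for a classification task.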
Regarding claim 9, Zhou teaches wherein the measurement data include point clouds with radar reflections and/or lidar reflections and/or ultrasonic reflections (Point cloud data can be generated, for example, by using LIDAR sensors or depth camera sensors that are on-board the vehicle 102. For example, each point in the point cloud can correspond to a reflection of laser light or other radiation transmitted in a particular direction by a sensor on-board the vehicle 102 [0016]).

Regarding claim 10, Zhou teaches wherein the processing of the measurement data is repeated with the proviso that at least one value of a measured variable of the one or more measured variables is not taken into account; (For any partition that does not correspond to a voxel, i.e., any partition to which no points were assigned during voxelization, the features at the spatial location corresponding to the partition are placeholder features, i.e., features set to zeroes or another default value [0059]); and an importance of the at least one value of the measured variable that was not taken into account is ascertained from a resulting change in an obtained output of the task network (The system can train each of the neural network components described with reference to FIG. 4 jointly with the task neural network on ground truth object detection outputs for point clouds in a set of training data. For example, when the task is object detection, the loss function used for the training of these neural networks can be an object detection loss that measures the quality of object detection outputs generated by these neural networks relative to the ground truth object detection outputs, e.g., smoothed losses for regressed values and cross entropy losses for classification outputs [0068]).

Regarding claim 11, Zhou teaches wherein training point clouds that are labeled (Fig.
4) with target outputs in relation to the predetermined task are selected as the measurement data; deviations of ascertained outputs of the task network from the target outputs are evaluated using a predetermined cost function; and task parameters that characterize behavior of the task network are optimized with an aim of improving the evaluation by the cost function during further processing of the training point clouds (The system can train each of the neural network components described with reference to FIG. 4 jointly with the task neural network on ground truth object detection outputs for point clouds in a set of training data. For example, when the task is object detection, the loss function used for the training of these neural networks can be an object detection loss that measures the quality of object detection outputs generated by these neural networks relative to the ground truth object detection outputs, e.g., smoothed losses for regressed values and cross entropy losses for classification outputs [0068]; This network input, i.e., the pseudo-image, can then be processed by a task neural network, e.g., a conventional two-dimensional convolutional neural network that has been configured to perform the desired task, to generate a network output for the desired task.
In some implementations, the system transforms the combined feature representation to a lower feature dimension, e.g., using a learned projection matrix, as part of generating the network input to reduce computational cost [0065]; For example, when the task is object detection, the task neural network can include a two-dimensional convolutional backbone neural network and a 3d object detection neural network head that is configured to process the output of the backbone neural network to generate an object detection output that identifies locations of objects in the point cloud, e.g., that identifies locations of bounding boxes in the image and a likelihood that each bounding box includes an object [0066]).

Regarding claim 12, Zhou teaches wherein a control signal is ascertained from the ascertained output of the task network, and a vehicle and/or a driving assistance system and/or a robot and/or a system for monitoring regions and/or a system for quality control and/or a system for medical imaging is controlled with the control signal (The on-board system 100 can provide the perception outputs 132 to a planning subsystem 140. When the planning subsystem 140 receives the perception outputs 132, the planning subsystem 140 can use the perception outputs 132 to generate planning decisions which plan the future trajectory of the vehicle 102. The planning decisions generated by the planning subsystem 140 can include, for example: yielding (e.g., to pedestrians identified in the perception outputs 132), stopping (e.g., at a “Stop” sign identified in the perception outputs 132), passing other vehicles identified in the perception outputs 132, adjusting vehicle lane position to accommodate a bicyclist identified in the perception outputs 132, slowing down in a school or construction zone, merging (e.g., onto a highway), and parking. The planning decisions generated by the planning subsystem 140 can be provided to a control system of the vehicle 102.
The control system of the vehicle can control some or all of the operations of the vehicle by implementing the planning decisions generated by the planning system. For example, in response to receiving a planning decision to apply the brakes of the vehicle, the control system of the vehicle 102 may transmit an electronic signal to a braking control unit of the vehicle. In response to receiving the electronic signal, the braking control unit can mechanically apply the brakes of the vehicle [0019]).

Regarding claim 13, Zhou teaches a machine-readable data carrier on which is stored a computer program including machine readable instructions for processing … the instructions, when executed by one or more computers and/or compute instances, cause the one or more computers and/or compute instances to perform the following steps … (Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus [0070]; This specification uses the term “configured” in connection with systems and computer program components.
For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions [0069]). (The remaining limitations of claim 13 are rejected according to the claim 1 analysis.)

Regarding claim 14, Zhou teaches one or more computers and/or compute instances for processing … the one or more computers and/or compute instances configured to … (Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus [0070]; This specification uses the term “configured” in connection with systems and computer program components. For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions.
For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions [0069]). (The remaining limitations of claim 14 are rejected according to the claim 1 analysis). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Caspers (US 20210072397 A1). Regarding claim 2, Zhou teaches aggregated representations…assigning value ranges of the respective measured variable, a number of points with values of the respective measured variable in the value ranges (As shown in FIG.
3, a point cloud that includes thirteen points is partitioned into four voxels V1, V2, V3, and V4, with six points being assigned to V1, four points being assigned to V2, two points being assigned to V3, and one point being assigned to V4. Each point is also associated with features of dimension F. The voxels V1, V2, V3, and V4 are determined by partitioning the scene into a fixed number of partitions according to the particular view of the scene and the points are assigned to the voxels by assigning each point to the voxel the point belongs to according to the given view [0039]). Zhou does not explicitly teach an aggregated representation that includes a histogram which does the assigning process. Caspers teaches using a histogram in this context (A generator is provided for generating synthetic LIDAR signals from a set of LIDAR signals measured with the aid of a physical LIDAR sensor … the generator includes a random generator and a first machine learning system, which receives vectors or tensors of random values from the random generator as input, and maps each such vector, or each such tensor, onto a histogram of a synthetic LIDAR signal with the aid of an internal processing chain … The histogram representation may encompass a representation in a temporal space … The internal processing chain of the first machine learning system is parameterized by a plurality of parameters. These parameters are set in such a way that the histogram representation of the LIDAR signal, and/or at least one characteristic variable derived from this representation, essentially has/have the same distribution for the synthetic LIDAR signals as for the measured LIDAR signals … The characteristic variable may be an arbitrary variable derived from the histogram representation of the LIDAR signal … the generator only requires a feedback … as to the extent to which the instantaneous parameters result in a distribution of the histogram [0006-0015]). 
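Purely as an illustration of the mechanism recited in claims 2-3 (the function and variable names below are hypothetical and appear in neither reference), the claimed histogram aggregation can be sketched in a few lines: the range in which the collected values of a measured variable move is divided into a predetermined number K of equal-width intervals, and the values are reduced to a K-element count vector, so the representation has the same dimensionality irrespective of how many points carry that variable.

```python
def aggregate_to_histogram(values, k=8):
    """Collapse an arbitrary number of measured values into a K-bin histogram.

    The value range [min, max] is split into K equal-width intervals, and the
    output is the per-interval count of points, always of fixed length K.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0  # guard against a zero-width range
    counts = [0] * k
    for v in values:
        # Clamp the top edge so that v == hi falls into the last interval.
        idx = min(int((v - lo) / width), k - 1)
        counts[idx] += 1
    return counts

# Point clouds of different sizes yield representations of identical length.
small = aggregate_to_histogram([0.1, 0.4, 0.9], k=4)
large = aggregate_to_histogram([x / 100 for x in range(100)], k=4)
assert len(small) == len(large) == 4
```

The fixed output length K, not the number of input points, is what allows the downstream task network to accept the representation as an input of constant dimensionality.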
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhou in view of Caspers to have each aggregated representation include a histogram that assigns, to value ranges of the respective measured variable, a number of points with values of the respective measured variable in those value ranges, in order to better understand and process the measured data and to more quickly identify outliers, peaks, and subpopulations, as a histogram is an effective way of showing the count of points within specific value ranges and is essential for understanding the data distribution, spread, and shape that simple summary statistics, such as the mean or median, hide. Regarding claim 3, Zhou teaches wherein the value ranges are ascertained by dividing a range in which collected values of the respective measured variable move into a predetermined number K of intervals (As shown in FIG. 3, a point cloud that includes thirteen points is partitioned into four voxels V1, V2, V3, and V4, with six points being assigned to V1, four points being assigned to V2, two points being assigned to V3, and one point being assigned to V4. Each point is also associated with features of dimension F. The voxels V1, V2, V3, and V4 are determined by partitioning the scene into a fixed number of partitions according to the particular view of the scene and the points are assigned to the voxels by assigning each point to the voxel the point belongs to according to the given view [0039]). Conclusion An inquiry concerning this communication or earlier communication from the examiner should be directed to LOGAN D COONS whose telephone number is (571) 272-2698. (via email: logan.coons@uspto.gov “without a written authorization by applicant in place, the USPTO will not respond via internet e-mail to an internet correspondence” MPEP 502.02 II). The examiner can normally be reached on M-F 9:30am – 6pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SPE Shelby Turner, can be reached at (571) 272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LOGAN D COONS/ Examiner, Art Unit 2857 /ALEXANDER SATANOVSKY/ Primary Examiner, Art Unit 2857