Prosecution Insights
Last updated: April 19, 2026
Application No. 18/183,693

MACHINE LEARNING MODELS FOR PROCESSING DATA FROM DIFFERENT VEHICLE PLATFORMS

Status: Non-Final OA (§101, §103, §112)
Filed: Mar 14, 2023
Examiner: KIM, SEHWAN
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: GM Cruise Holdings LLC
OA Round: 1 (Non-Final)

Grant Probability: 60% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 4y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% of resolved cases granted (86 granted / 144 resolved; +4.7% vs TC avg)
Interview Lift: strong, +65.6% higher allowance rate among resolved cases with an interview
Typical Timeline: 4y 1m average prosecution; 35 applications currently pending
Career History: 179 total applications across all art units
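The headline figures above are simple ratios over the examiner's resolved cases. Below is a minimal sketch of that arithmetic; the career totals come from this page, while the with/without-interview split is a hypothetical assumption (the page does not publish those underlying counts):

```python
# Sketch of the dashboard arithmetic. The totals (86 granted / 144 resolved)
# are from this page; the interview split below is ASSUMED for illustration,
# chosen so the lift lands near the reported +65.6%.

granted, resolved = 86, 144
print(f"Career allow rate: {granted / resolved:.1%}")  # 59.7%, shown as 60%

# Interview lift = relative increase in allowance rate for resolved cases
# that had an examiner interview vs. those that did not.
iv_granted, iv_resolved = 37, 45          # hypothetical: with interview
no_iv_granted, no_iv_resolved = 49, 99    # hypothetical: without interview

rate_with = iv_granted / iv_resolved            # ~82.2%
rate_without = no_iv_granted / no_iv_resolved   # ~49.5%
print(f"Interview lift: {rate_with / rate_without - 1:+.1%}")  # about +66%
```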

Statute-Specific Performance

§101: 20.8% (-19.2% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 6.3% (-33.7% vs TC avg)
§112: 23.3% (-16.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 144 resolved cases.

Office Action

Rejections under §101, §103, and §112.
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner's Note

The Examiner encourages Applicant to schedule an interview to discuss issues related to, for example, the rejections noted below under 35 U.S.C. §§ 112, 101, and 103, for moving toward allowance. Providing supporting paragraph(s) for each limitation of amended/new claim(s) in the Remarks is strongly requested so that the Examiner can interpret the claims clearly and definitely.

Priority

Acknowledgment is made of Applicant's priority claim for the present application, filed on 03/14/2023.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-4, 10, 13-14, and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites the limitation "the scene element scene" (line 4). There is insufficient antecedent basis for this limitation in the claim, and it is not clear what it refers to. It appears it may need to read "a scene element scene", "the scene element", or something else. For purposes of examination, "the scene element" is used. Claim 13 is rejected for the same reason.

Claim 4 recites the limitation "the second portion" (line 1). There is insufficient antecedent basis for this limitation in the claim, and it is not clear what it refers to. It appears it may need to read "a second portion" or something else. For purposes of examination, "a second portion" is used. Claim 14 is rejected for the same reason.

Claim 4 also recites the limitation "the one or more machine learning models" (line 2). There is insufficient antecedent basis for this limitation in the claim, and it is not clear what it refers to. It appears it may need to read "one or more machine learning models" or something else. For purposes of examination, "one or more machine learning models" is used. Claim 14 is rejected for the same reason.

Claim 10 recites "a second set of training data associated with the reference vehicle platform" (line 4), but also recites "the modified second set of training data associated with the target vehicle platform" (line 7), so it is not clear which is correct. It appears line 4 may need to read "a second set of training data associated with the target vehicle platform", or something else. For purposes of examination, "a second set of training data associated with the target vehicle platform" (line 4) is used. Claim 19 is rejected for the same reason.
Claims 3-4, 10, 13-14, and 19 each recite limitations that raise the indefiniteness issues set forth above, and their dependent claims are rejected at least on the basis of their direct and/or indirect dependency from the claims listed above. Appropriate explanation and/or amendment is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1

Step 1: The claim recites a system; therefore, it falls into the statutory category of a machine.

Step 2A, Prong 1: The limitations of "… comprising: …: …, …; determine one or more differences between the sensor data associated with the target vehicle platform and additional sensor data associated with a reference vehicle platform, …; based on the one or more differences, map the sensor data associated with the target vehicle platform to the reference vehicle platform; and process the mapped sensor data …", as drafted, cover, under their broadest reasonable interpretation, performance of the limitations in the mind. Nothing in the claim elements precludes the steps from practically being performed in the mind; for example, in the context of this claim, the limitations encompass the user mentally thinking with a physical aid (e.g., pencil and paper). A claim limitation that, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The claim recites additional elements that are mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites additional elements ("a memory; and one or more processors coupled to the memory, the one or more processors being configured to"; "via the one or more software models") that use a device and/or a model to process data. The device and the model in each step are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data), such that they amount to no more than mere instructions to apply the exception using a generic computer component; they impose no meaningful limits on practicing the abstract idea. The claim also recites an additional element ("obtain sensor data collected by an autonomous vehicle (AV) in a scene") that is a mere act of receiving data, recited at a high level of generality, which adds insignificant extra-solution activity to the judicial exception (see MPEP 2106.05(g)) and likewise imposes no meaningful limits on practicing the abstract idea. Finally, the claim recites additional elements ("the AV comprising a target vehicle platform, the sensor data describing, measuring, or depicting one or more elements in the scene"; "the reference vehicle platform being associated with one or more software models that are trained to process data from the reference vehicle platform") that recite a particular type or source of model/data to be used in performing the abstract idea; limiting the abstract idea to a particular type or source of model/data is an attempt to limit it to a particular field of use or technological environment, which does not integrate it into a practical application (see MPEP 2106.05(h)). The claim is directed to an abstract idea.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration into a practical application, the use of a generic computer component to perform each step amounts to no more than mere instructions to apply the exception, which cannot provide an inventive concept (MPEP 2106.05(f)). The act of receiving data is insignificant extra-solution activity, which does not amount to an inventive concept, particularly when the activity is well-understood, routine, and conventional; see MPEP 2106.05(d)(II) ("Receiving or transmitting data over a network"; "Storing and retrieving information in memory"). And limiting the abstract idea to a particular type or source of model/data is a field-of-use limitation that does not amount to significantly more (MPEP 2106.05(h)). The claim is not patent eligible.
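The Office Action applies this same Step 1 / Step 2A (Prongs 1 and 2) / Step 2B skeleton to each claim below, varying only the quoted limitations. Purely as an editorial aid (this sketch is not part of the Office Action and not an official USPTO tool), the MPEP 2106 flow can be reduced to a small decision routine over four yes/no findings:

```python
# Editorial sketch, not part of the Office Action and not an official USPTO
# tool: the MPEP 2106 (Alice/Mayo) eligibility flow the Examiner applies to
# each claim, reduced to a decision routine over four yes/no findings.

from dataclasses import dataclass

@dataclass
class ClaimFindings:
    statutory_category: bool             # Step 1: process, machine, manufacture, or composition?
    recites_judicial_exception: bool     # Step 2A, Prong 1: e.g., a mental process
    integrated_into_practical_app: bool  # Step 2A, Prong 2
    significantly_more: bool             # Step 2B: inventive concept

def eligible_under_101(f: ClaimFindings) -> bool:
    if not f.statutory_category:
        return False                 # fails Step 1 outright
    if not f.recites_judicial_exception:
        return True                  # no judicial exception; analysis ends at Prong 1
    if f.integrated_into_practical_app:
        return True                  # Prong 2 saves the claim
    return f.significantly_more      # otherwise Step 2B decides

# Claim 1 as the Examiner characterizes it: a machine (Step 1 satisfied) that
# recites a mental process, with only generic-computer, extra-solution, and
# field-of-use additional elements at Prong 2 and Step 2B.
print(eligible_under_101(ClaimFindings(True, True, False, False)))  # False
```

Under the Examiner's characterization, every claim below resolves the same way: the exception is recited, and the additional elements fail both Prong 2 and Step 2B.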
Regarding claim 2

Step 1: The claim recites a system; therefore, it falls into the statutory category of a machine.

Step 2A, Prong 1: The limitations of "at least one of transferring a first attribute of the additional sensor data to the sensor data associated with the target vehicle platform and removing a second attribute of the additional sensor data from the sensor data associated with the target vehicle platform, and …", as drafted, cover, under their broadest reasonable interpretation, performance of the limitations in the mind; nothing in the claim elements precludes the steps from practically being performed in the mind (e.g., mentally, with a physical aid such as pencil and paper). Accordingly, the claim recites an abstract idea within the "Mental Processes" grouping.

Step 2A, Prong 2: Not integrated into a practical application. The additional element "wherein the one or more differences comprises at least one of the first attribute and the second attribute" recites a particular type or source of model/data to be used in performing the abstract idea; limiting the abstract idea to a particular type or source of model/data is an attempt to limit it to a particular field of use or technological environment, which does not integrate it into a practical application (MPEP 2106.05(h)).

Step 2B: The field-of-use limitation identified above does not amount to significantly more than the abstract idea (MPEP 2106.05(h)).

Regarding claim 3

Step 1: A system (a machine). Step 2A, Prong 1: The limitations of "determining that a first portion of the additional sensor data measures or depicts a scene element based on relative poses in space of the scene element scene and a sensor of the reference vehicle platform that captured the first portion of the additional sensor data that measures or depicts the scene element, …; determining that the sensor data associated with the target vehicle platform does not measure or depict the scene element; and modifying, …, the sensor data associated with the target vehicle platform to include a second portion of sensor data that measures or depicts the scene element, …, and …", as drafted, cover performance of the limitations in the mind under their broadest reasonable interpretation, for the same reasons given for claims 1 and 2; the claim recites a "Mental Processes" abstract idea.

Step 2A, Prong 2: Not integrated into a practical application. The additional elements "wherein the scene element comprises at least one of an object, a structure, a device, at least a portion of a person, and a condition" and "wherein the one or more differences comprises the scene element measured or depicted in the first portion of the additional sensor data" merely limit the abstract idea to a particular type or source of model/data, i.e., a field of use (MPEP 2106.05(h)). The additional elements "via one or more machine learning models" and "wherein the second portion of sensor data is generated by the one or more machine learning models" use a device and/or model, recited at a high level of generality, as a tool to perform the abstract idea, amounting to mere instructions to apply the exception on a generic computer (MPEP 2106.05(f)); they impose no meaningful limits on practicing the abstract idea.

Step 2B: Neither the field-of-use limitations (MPEP 2106.05(h)) nor mere instructions to apply the exception using a generic computer component (MPEP 2106.05(f)) can provide an inventive concept. The claim is not patent eligible.

Regarding claim 4

Step 1: A system (a machine). Step 2A, Prong 1: The limitations of "wherein modifying the sensor data to include the second portion of sensor data comprises removing, …, a different element in the sensor data that occludes the scene element, …", as drafted, cover performance in the mind for the same reasons given above; the claim recites a "Mental Processes" abstract idea.

Step 2A, Prong 2: Not integrated into a practical application. The additional element "via the one or more machine learning models" is a generic computer/model used as a tool (MPEP 2106.05(f)). The additional element "the different element comprising at least one of a different object, a different device, a different structure, at least a portion of a different person, and a different condition, and wherein the different condition comprises at least one of a lighting condition in the scene, a weather condition in the scene, sensor data noise, a brightness level of the sensor data, and an opacity level of one or more elements in the sensor data" limits the abstract idea to a particular type or source of model/data, i.e., a field of use (MPEP 2106.05(h)).

Step 2B: For the reasons above under MPEP 2106.05(f) and 2106.05(h), the additional elements do not amount to significantly more. The claim is not patent eligible.

Regarding claim 5

Step 1: A system (a machine). Step 2A, Prong 1: The limitations of "determining that a portion of the sensor data associated with the target vehicle platform measures or depicts an element in the scene based on relative poses of the element in the scene and a sensor of the target vehicle platform that captured the portion of the sensor data that measures or depicts the element, …; determining that the additional sensor data associated with the reference vehicle platform does not measure or depict the element; and removing, …, the element from the sensor data associated with the target vehicle platform", as drafted, cover performance in the mind for the same reasons given above; the claim recites a "Mental Processes" abstract idea.

Step 2A, Prong 2: Not integrated into a practical application. The additional element "wherein the element comprises at least one of an object, a structure, a device, at least a portion of a person, and a condition" is a field-of-use limitation (MPEP 2106.05(h)); the additional element "via one or more machine learning models" is a generic computer/model used as a tool (MPEP 2106.05(f)).

Step 2B: For the reasons above under MPEP 2106.05(h) and 2106.05(f), the additional elements do not amount to significantly more. The claim is not patent eligible.

Regarding claim 6

Step 1: A system (a machine). Step 2A, Prong 1: The limitations of "…, and wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises modifying the sensor data to reflect the second perspective reflected in the additional sensor data", as drafted, cover performance in the mind for the same reasons given above; the claim recites a "Mental Processes" abstract idea.

Step 2A, Prong 2: Not integrated into a practical application. The additional element "wherein the one or more differences comprises a difference in a first sensor perspective reflected in the sensor data and a second sensor perspective reflected in the additional sensor data" is a field-of-use limitation (MPEP 2106.05(h)).

Step 2B: The field-of-use limitation does not amount to significantly more (MPEP 2106.05(h)).

Regarding claim 7

Step 1: A system (a machine). Step 2A, Prong 1: The claim recites the abstract idea identified above regarding claim 6.

Step 2A, Prong 2: Not integrated into a practical application. The additional element "wherein the difference in the first sensor perspective and the second sensor perspective is based on at least one of a difference in a body type of the target vehicle platform and the reference vehicle platform, a difference in a size of the target vehicle platform and the reference vehicle platform, a difference in a shape of the target vehicle platform and the reference vehicle platform, a difference in dimensions of the target vehicle platform and the reference vehicle platform, a difference between a respective pose of one or more sensors that captured the sensor data relative to one or more portions of the target vehicle platform and a respective pose of one or more additional sensors that captured the additional sensor data relative to one or more portions of the reference vehicle platform, and a difference between a first pose of the one or more sensors in three-dimensional (3D) space and a second pose of the one or more additional sensors in 3D space" is a field-of-use limitation (MPEP 2106.05(h)).

Step 2B: The field-of-use limitation does not amount to significantly more (MPEP 2106.05(h)).

Regarding claim 8

Step 1: A system (a machine). Step 2A, Prong 1: The limitations of "…, wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises mapping the data from the first type of sensor to the reference vehicle platform and the data from the second type of sensor, …", as drafted, cover performance in the mind for the same reasons given above; the claim recites a "Mental Processes" abstract idea.

Step 2A, Prong 2: Not integrated into a practical application. The additional elements "wherein the sensor data comprises data from a first type of sensor and the additional data comprises data from a second type of sensor" and "wherein one of the first type of sensor or the second type of sensor comprises one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor, and wherein a different one of the first type of sensor or the second type of sensor comprises a different one of the LIDAR sensor, the RADAR sensor, the camera sensor, the acoustic sensor, or the TOF sensor" are field-of-use limitations (MPEP 2106.05(h)).

Step 2B: The field-of-use limitations do not amount to significantly more (MPEP 2106.05(h)).

Regarding claim 9

Step 1: A system (a machine). Step 2A, Prong 1: The limitations of "…, wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises mapping the data from the first type of sensor to the reference vehicle platform and the fused data from the multiple types of sensors, …", as drafted, cover performance in the mind for the same reasons given above; the claim recites a "Mental Processes" abstract idea.

Step 2A, Prong 2: Not integrated into a practical application. The additional elements "wherein the sensor data comprises data from a first type of sensor and the additional data comprises fused data from a multiple types of sensors" and "wherein the multiple types of sensors comprise at least one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor, and wherein a different one of the first type of sensor or the second type of sensor comprises a different one of the LIDAR sensor, the RADAR sensor, the camera sensor, the acoustic sensor, and the TOF sensor" are field-of-use limitations (MPEP 2106.05(h)).

Step 2B: The field-of-use limitations do not amount to significantly more (MPEP 2106.05(h)).

Regarding claim 10

Step 1: A system (a machine). Step 2A, Prong 1: The limitations of "…: generate simulation data that simulates a context of a first set of training data associated with the reference vehicle platform; based on the simulation data, modify a second set of training data associated with the reference vehicle platform to reflect the context of the first set of training data; and …, …", as drafted, cover performance in the mind for the same reasons given above; the claim recites a "Mental Processes" abstract idea.

Step 2A, Prong 2: Not integrated into a practical application. The additional elements "wherein the one or more processors are configured to" and "wherein mapping the sensor data to the reference vehicle platform is done via the one or more machine learning models" use a device and/or model, recited at a high level of generality, as a tool to perform the abstract idea (MPEP 2106.05(f)). The additional element "based on the first set of training data and the modified second set of training data, train one or more machine learning models to map the modified second set of training data associated with the target vehicle platform to the reference vehicle platform" is recited at such a high level, without any detail as to how a model is trained, that it amounts to only the idea of a solution or outcome; it fails to recite how the solution is accomplished and therefore represents no more than mere instructions to apply the judicial exception on a computer (MPEP 2106.05(f)).

Step 2B: The generic-computer elements and the high-level training element, for the reasons above, cannot provide an inventive concept (MPEP 2106.05(f)). The claim is not patent eligible.

Regarding claims 11-19

Claims 11, 12, 13, 14, and 15 are rejected for the reasons set forth in the rejections of claims 1, 2, 3, 4, and 5, respectively; claim 16 for the reasons set forth in the rejection of the combination of claims 6 and 7; claim 17 for the reasons set forth for claim 9; claim 18 for claim 8; and claim 19 for claim 10. In each case the rejection applies mutatis mutandis: the claim recites an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.

Regarding claim 20

The claim recites "A non-transitory computer-readable medium having stored thereon instructions which, when executed by one or more processors, cause the one or more processors to:" perform precisely the system of claim 1. Because performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) and "Storing and retrieving information in memory" (see MPEP 2106.05(g) on insignificant extra-solution activity, and MPEP 2106.05(d) on well-understood, routine, conventional activity) can neither integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself, the claim is rejected for the reasons set forth in the rejection of claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the Examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 10-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Saleh et al. (Domain Adaptation for Vehicle Detection from Bird's Eye View LiDAR Point Cloud Data) in view of Yoon et al. (Mapless Online Detection of Dynamic Objects in 3D Lidar).
Regarding claim 1

Saleh teaches:

A system comprising: (Saleh [sec(s) 1] "To this end, in this work, we will be proposing a DA approach for vehicle detection in real point cloud data from 3D LiDAR sensors represented as BEV images. The proposed DA approach will be a deep learning-based approach based on deep generative adversarial networks (GANs) [31].")

obtain sensor data collected by an autonomous vehicle (AV) in a scene, the AV comprising a target vehicle platform, the sensor data describing, measuring, or depicting one or more elements in the scene; (Saleh [fig(s) 1] "Sample of BEV images of real point cloud data (left) from a real Velodyne 3d LiDAR from KITTI dataset [9] and a synthetic point cloud data (right) from a simulated 3D LiDAR sensor from MDLS dataset [29]."; [sec(s) 1] "However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27]. In DA, the goal is to learn from one data distribution (referred to as the source domain) a perfect model on a different data distribution (referred to as the target domain). In traffic environments, DA has recently shown promising results for image translation between different domain pairs such as night/day, synthetic/real images and RGB/thermal images [31]. … One of the most common techniques was to project a top-down bird's eye view (BEV) of the point cloud data on a 2D plane (ie. ground). … To this end, in this work, we will be proposing a DA approach for vehicle detection in real point cloud data from 3D LiDAR sensors represented as BEV images. The proposed DA approach will be a deep learning-based approach based on deep generative adversarial networks (GANs) [31]. For the vehicle detection task, it will be based on state-of-the-art deep object detection architecture YOLOv3 [18]."; [sec(s) Abs] "Point cloud data from 3D LiDAR sensors are one of the most crucial sensor modalities for versatile safety-critical applications such as self-driving vehicles."; [sec(s) 4.1] "The MLDS dataset was generated from high fidelity simulated urban traffic environments from the CARLA simulator [7] using a simulated Velodyne HDL-64E sensor.")

determine one or more differences between the sensor data associated with the target vehicle platform and additional sensor data associated with a reference vehicle platform, the reference vehicle platform being associated with one or more software models that are trained to process data from the reference vehicle platform; (Saleh [fig(s) 1] and [sec(s) 1], quoted above; [sec(s) 3] "The main focus of this work is to provide a framework for bridging the gap between real and synthetic point cloud data represented as BEV images for the vehicle detection task. That being said, the same framework can still be used for other perceptions tasks on point cloud data such as semantic segmentation or object tracking. In this section, we will first provide our formulation for the problem at hand. Then subsequently, we will break-down the building blocks of the proposed framework."; [sec(s) 3.1] "In our formulation for the vehicle detection task from real BEV point cloud data, we are proposing a framework consisting of two stages. In the first stage of our framework, we train a CycleGAN model between unpaired synthetic BEV point cloud data and real BEV point cloud data. The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa. As a result, given any annotated synthetic BEV point cloud dataset with vehicles, the trained CycleGAN model will transform that dataset to an annotated real-like BEV point cloud data. Finally, using the transformed dataset, we could train another ConvNet-based model for the vehicle detection task in real BEV point cloud data.")

based on the one or more differences, map the sensor data associated with the target vehicle platform to the reference vehicle platform; and (Saleh [fig(s) 1], [sec(s) 3], and [sec(s) 3.1], quoted above; [sec(s) 3.2] "In this work, we will be exploring the CycleGAN architecture for the task of DA between real BEV point cloud domain and synthetic BEV point cloud domain. One of the advantages of the CycleGAN architecture in the context of DA is it can learn transformation between source and target domains without any supervised one-to-one mapping between the two domains.")

process the mapped sensor data via the one or more software models. (Saleh [fig(s) 1], [sec(s) 1], [sec(s) 3], and [sec(s) 3.1], quoted above.)

However, Saleh does not appear to explicitly teach: a memory; and one or more processors coupled to the memory, the one or more processors being configured to:

Yoon teaches a memory; and one or more processors coupled to the memory, the one or more processors being configured to: (Yoon [sec(s) IV] "On a laptop with an Intel Core i7-6820HQ CPU, we currently process lidar scans at 3 Hz on average on a single thread, slower than the Velodyne scan rate (10 Hz)".)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Saleh with the computer system of Yoon. One of ordinary skill in the art would have been motivated to combine in order to explicitly compensate for the moving-while-scanning operation (motion distortion) of present-day 3D spinning lidar sensors, using a motion-compensated freespace querying algorithm that classifies between dynamic (currently moving) and static (currently stationary) labels at the point level. (Yoon [sec(s) Abs] "We explicitly compensate for the moving-while-scanning operation (motion distortion) of present-day 3D spinning lidar sensors. Our detection method uses a motion-compensated freespace querying algorithm and classifies between dynamic (currently moving) and static (currently stationary) labels at the point level.")
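Since the §103 combination turns on Saleh's CycleGAN-based domain adaptation, a compact sketch may help readers map the claim language onto that technique. This is a hypothetical PyTorch illustration with assumed network shapes, loss weights, and training-loop details (the discriminator update is omitted); it is not code from Saleh, Yoon, or application 18/183,693:

```python
# Editorial sketch of a CycleGAN-style domain adaptation step of the kind the
# Office Action reads onto Saleh: translate BEV LiDAR images from a target
# vehicle platform's sensor domain into a reference platform's domain so that
# models trained on the reference domain can process them. All shapes and
# hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class Generator(nn.Module):      # e.g., G_target->reference
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16), conv_block(16, 16),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):  # real-vs-generated in one domain
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # patch scores -> one score per image

g_t2r, g_r2t = Generator(), Generator()  # the two CycleGAN generators
d_ref = Discriminator()                  # discriminator on the reference domain
l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(list(g_t2r.parameters()) + list(g_r2t.parameters()), lr=2e-4)

def generator_step(bev_target, bev_reference):
    """One simplified generator update: adversarial + cycle-consistency loss."""
    fake_ref = g_t2r(bev_target)        # map target-platform data to the reference domain
    recon_target = g_r2t(fake_ref)      # cycle back for consistency
    adv = bce(d_ref(fake_ref), torch.ones(bev_target.size(0)))  # try to fool d_ref
    cyc = l1(recon_target, bev_target)  # unpaired data: consistency, not pixel supervision
    loss = adv + 10.0 * cyc
    opt.zero_grad(); loss.backward(); opt.step()
    return fake_ref.detach()            # "mapped sensor data" for downstream models

# Unpaired mini-batches of single-channel BEV images from each platform:
mapped = generator_step(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))
```

In Saleh's framing, the two domains are synthetic and real BEV images; the Office Action reads that synthetic-to-real translation onto the claimed mapping from a target vehicle platform to a reference platform whose trained models then consume the translated data.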
wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises: (See claim 1) Saleh further teaches determining that a first portion of the additional sensor data measures or depicts a scene element based on relative poses in space of the scene element scene and a sensor of the reference vehicle platform that captured the first portion of the additional sensor data that measures or depicts the scene element, wherein the scene element comprises at least one of an object, a structure, a device, at least a portion of a person, and a condition; (Saleh [fig(s) 1] [sec(s) 4.1] “For the task of the DA between synthetic and real BEV point cloud images, we relied on two datasets. The first dataset is the recently released Motion-Distorted LiDAR Simulation (MDLS) dataset introduced in [29]. This dataset represents the synthetic domain S of our CycleGAN-based DA approach discussed in Section 3.2. The MLDS dataset was generated from high fidelity simulated urban traffic environments from the CARLA simulator [7] using a simulated Velodyne HDL-64E sensor. … The dataset was annotated with the position of the vehicles in the scene. For our DA task, we first preprocessed the point cloud scans in order to get a BEV image of each scan according to the method introduced in [15]. As a result, we get a total of 6K BEV point cloud images similar to the right image shown in Fig. 1. The second dataset we utilised for the real domain R of our CycleGAN-based DA approach is the BEV benchmark data from the KITTI dataset [9]. The BEV benchmark data consists of 7481 training images and point cloud scans and 7518 test images and point cloud scans. The point cloud data was captured using a real 3D LiDAR sensor the Velodyne HDL-64E sensor. The dataset contains annotations for multiple objects in the traffic scene such as vehicles, pedestrians and cyclists. Similar to the pre-processing step we have done for the MLDS dataset we did it as well for the KITTI dataset in order to get BEV point cloud images like the one shown on the left in Fig. 1.”;) determining that the sensor data associated with the target vehicle platform does not measure or depict the scene element; and (Saleh [fig(s) 1] [fig(s) 2] [sec(s) 1] “These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 4.2] “Firstly, in order to evaluate the effectiveness of our proposed CycleGAN based DA approach for the vehicle detection task from real BEV point cloud images. In fig. 3, we show qualitative results of the trained CycleGAN-based DA approach between synthetic and real BEV point cloud images. In the first row of the figure is the input synthetic BEV point cloud image to our model. The second row represents the output from the generator GS→R of our trained CycleGAN model. The third row shows one sample of a real BEV point cloud image from the KITTI dataset. As it can be noticed, the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI. 
More specifically, the generated image captures pretty well the structure of the vehicles and the distortion/noise artefacts from resulting from the real Velodyne 3D LiDAR sensor.”;) modifying, via one or more machine learning models, the sensor data associated with the target vehicle platform to include a second portion of sensor data that measures or depicts the scene element, wherein the one or more differences comprises the scene element measured or depicted in the first portion of the additional sensor data, and wherein the second portion of sensor data is generated by the one or more machine learning models. (Saleh [fig(s) 1] [fig(s) 2] [sec(s) 1] “However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 4.2] “Firstly, in order to evaluate the effectiveness of our proposed CycleGAN based DA approach for the vehicle detection task from real BEV point cloud images. In fig. 3, we show qualitative results of the trained CycleGAN-based DA approach between synthetic and real BEV point cloud images. In the first row of the figure is the input synthetic BEV point cloud image to our model. The second row represents the output from the generator GS→R of our trained CycleGAN model. The third row shows one sample of a real BEV point cloud image from the KITTI dataset. As it can be noticed, the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI. More specifically, the generated image captures pretty well the structure of the vehicles and the distortion/noise artefacts from resulting from the real Velodyne 3D LiDAR sensor.”; e.g., “generated BEV point cloud from our CycleGAN” read(s) on “second portion”. In addition, e.g., “gaps” read(s) on “differences”.) Regarding claim 4 The combination of Saleh, Yoon teaches claim 1. Saleh further teaches wherein modifying the sensor data to include the second portion of sensor data comprises [removing], via the one or more machine learning models, a different element in the sensor data that [occludes] the scene element, the different element comprising at least one of a different object, a different device, a different structure, at least a portion of a different person, and a different condition, and wherein the different condition comprises at least one of a lighting condition in the scene, a weather condition in the scene, sensor data noise, a brightness level of the sensor data, and an opacity level of one or more elements in the sensor data. (Saleh [fig(s) 1] [fig(s) 2] [sec(s) 1] “However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. 
Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 4.2] “Firstly, in order to evaluate the effectiveness of our proposed CycleGAN based DA approach for the vehicle detection task from real BEV point cloud images. In fig. 3, we show qualitative results of the trained CycleGAN-based DA approach between synthetic and real BEV point cloud images. In the first row of the figure is the input synthetic BEV point cloud image to our model. The second row represents the output from the generator GS→R of our trained CycleGAN model. The third row shows one sample of a real BEV point cloud image from the KITTI dataset. As it can be noticed, the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI. More specifically, the generated image captures pretty well the structure of the vehicles and the distortion/noise artefacts from resulting from the real Velodyne 3D LiDAR sensor.”; e.g., “generated BEV point cloud from our CycleGAN” read(s) on “second portion”.) Yoon further teaches wherein modifying the sensor data to include the second portion of sensor data comprises removing, via the one or more machine learning models, a different element in the sensor data that occludes the scene element. (Yoon [fig(s) 3] “A vehicle throughout the detection pipeline. Refer to the pipeline in Fig. 2 for corresponding letters (a) to (d).” [fig(s) 5] [sec(s) III.C] “We check all dynamic query points, which can be as many as half the query scan points, against the freespace of another scan to correct mislabels from the pointcloud comparison. Recall that points are mislabelled dynamic because of viewpoint occlusions or they are new surface observations. Dynamic points inside freespace are consistent with their current label, while ones on the border or outside freespace may not truly be dynamic. We use this argument to refine incorrect dynamic labels (see Fig. 3a and Fig. 3b). We do not check freespace for static points since a pointcloud comparison is equivalent to a freespace border check” [sec(s) III.D] “Our freespace check is susceptible to error because of finite lidar resolution (e.g., consider freespace at far ranges), leaving sparse traces of mislabelled dynamic points (see Fig. 3b). The query scan measurements are arranged into an image representation. Each laser forms a row and the consecutive measurements the columns. Dynamic labels have an image value of 1 and static labels have a value of 0. We filter outliers (dynamic mislabels) by sliding a box filter throughout the image (see Fig. 5 for an example). We apply our filter (Fig. 5 middle) with a pixelwise XNOR (exclusive logical NOR) operation. The sum of all XNOR operations is a numerical score. Scores greater than a constant score threshold are considered outliers. The score threshold depends on the lidar resolution.”;) The combination of Saleh, Yoon is combinable with Yoon for the same rationale as set forth above with respect to claim 1. Regarding claim 5 The combination of Saleh, Yoon teaches claim 1. 
Saleh further teaches wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises: (See claim 1) Saleh further teaches determining that a portion of the sensor data associated with the target vehicle platform measures or depicts an element in the scene based on relative poses of the element in the scene and a sensor of the target vehicle platform that captured the portion of the sensor data that measures or depicts the element, wherein the element comprises at least one of an object, a structure, a device, at least a portion of a person, and a condition; (Saleh [fig(s) 1] [sec(s) 4.1] “The second dataset we utilised for the real domain R of our CycleGAN-based DA approach is the BEV benchmark data from the KITTI dataset [9]. The BEV benchmark data consists of 7481 training images and point cloud scans and 7518 test images and point cloud scans. The point cloud data was captured using a real 3D LiDAR sensor the Velodyne HDL-64E sensor. The dataset contains annotations for multiple objects in the traffic scene such as vehicles, pedestrians and cyclists. Similar to the pre-processing step we have done for the MLDS dataset we did it as well for the KITTI dataset in order to get BEV point cloud images like the one shown on the left in Fig. 1.” [sec(s) 1] “These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR.”; Regarding “KITTI dataset [9]”, please refer to Geiger et al. (Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite) (e.g., [sec(s) 1] “In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for stereo, optical flow, visual odometry / SLAM and 3D object detection. Our benchmarks are captured by driving around a mid-size city, in rural areas and on highways. Our recording platform is equipped with two high resolution stereo camera systems (grayscale and color), a Velodyne HDL-64E laser scanner that produces more than one million 3D points per second and a state-of-the-art OXTS RT 3003 localization system which combines GPS, GLONASS, an IMU and RTK correction signals. The cameras, laser scanner and localization system are calibrated and synchronized, providing us with accurate ground truth. Table 1 summarizes our benchmarks and provides a comparison to existing datasets”)) determining that the additional sensor data associated with the reference vehicle platform does not measure or depict the element; and (Saleh [fig(s) 1] [fig(s) 2] [sec(s) 1] “However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 4.2] “Firstly, in order to evaluate the effectiveness of our proposed CycleGAN based DA approach for the vehicle detection task from real BEV point cloud images. In fig. 3, we show qualitative results of the trained CycleGAN-based DA approach between synthetic and real BEV point cloud images. 
In the first row of the figure is the input synthetic BEV point cloud image to our model. The second row represents the output from the generator GS→R of our trained CycleGAN model. The third row shows one sample of a real BEV point cloud image from the KITTI dataset. As it can be noticed, the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI. More specifically, the generated image captures pretty well the structure of the vehicles and the distortion/noise artefacts from resulting from the real Velodyne 3D LiDAR sensor.”;) [removing], via one or more machine learning models, the element from the sensor data associated with the target vehicle platform. (Saleh [fig(s) 1] [fig(s) 2] [sec(s) 1] “Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 4.2] “In the first row of the figure is the input synthetic BEV point cloud image to our model. The second row represents the output from the generator GS→R of our trained CycleGAN model. The third row shows one sample of a real BEV point cloud image from the KITTI dataset. As it can be noticed, the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI. More specifically, the generated image captures pretty well the structure of the vehicles and the distortion/noise artefacts from resulting from the real Velodyne 3D LiDAR sensor.” [sec(s) 2.1] “In CycleGAN, it is essentially comprised of two conditional GAN networks. The first network works on the transformation from the source domain (S) to the target domain (T), S → T, while the other one works on the transformation in the opposite direction, T → S.” [sec(s) 3.1] “The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa.” [sec(s) 3.2] “The proposed CycleGAN-based DA approach achieve this mapping via the two generators, GS→R and GR→S and the two discriminators DS and DR.”;) Yoon further teaches removing, via one or more machine learning models, the element from the sensor data associated with the target vehicle platform. (Yoon [fig(s) 3] “A vehicle throughout the detection pipeline. Refer to the pipeline in Fig. 2 for corresponding letters (a) to (d).” [fig(s) 5] [sec(s) III.C] “We check all dynamic query points, which can be as many as half the query scan points, against the freespace of another scan to correct mislabels from the pointcloud comparison. Recall that points are mislabelled dynamic because of viewpoint occlusions or they are new surface observations. Dynamic points inside freespace are consistent with their current label, while ones on the border or outside freespace may not truly be dynamic. We use this argument to refine incorrect dynamic labels (see Fig. 3a and Fig. 3b). We do not check freespace for static points since a pointcloud comparison is equivalent to a freespace border check” [sec(s) III.D] “Our freespace check is susceptible to error because of finite lidar resolution (e.g., consider freespace at far ranges), leaving sparse traces of mislabelled dynamic points (see Fig. 3b). The query scan measurements are arranged into an image representation. Each laser forms a row and the consecutive measurements the columns. 
Dynamic labels have an image value of 1 and static labels have a value of 0. We filter outliers (dynamic mislabels) by sliding a box filter throughout the image (see Fig. 5 for an example). We apply our filter (Fig. 5 middle) with a pixelwise XNOR (exclusive logical NOR) operation. The sum of all XNOR operations is a numerical score. Scores greater than a constant score threshold are considered outliers. The score threshold depends on the lidar resolution.”;) The combination of Saleh, Yoon is combinable with Yoon for the same rationale as set forth above with respect to claim 1. Regarding claim 6 The combination of Saleh, Yoon teaches claim 1. Saleh further teaches wherein the one or more differences comprises a difference in a first sensor perspective reflected in the sensor data and a second sensor perspective reflected in the additional sensor data, and wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises modifying the sensor data to reflect the second perspective reflected in the additional sensor data. (Saleh [fig(s) 2] [sec(s) 1] “However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 3.1] “In the first stage of our framework, we train a CycleGAN model between unpaired synthetic BEV point cloud data and real BEV point cloud data. The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa.” [sec(s) 4.2] “the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI. More specifically, the generated image captures pretty well the structure of the vehicles and the distortion/noise artefacts from resulting from the real Velodyne 3D LiDAR sensor.”; e.g., “transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa” read(s) on “modifying the sensor data to reflect the second perspective reflected in the additional sensor data”.) Regarding claim 7 The combination of Saleh, Yoon teaches claim 6. 
Saleh further teaches wherein the difference in the first sensor perspective and the second sensor perspective is based on at least one of a difference in a body type of the target vehicle platform and the reference vehicle platform, a difference in a size of the target vehicle platform and the reference vehicle platform, a difference in a shape of the target vehicle platform and the reference vehicle platform, a difference in dimensions of the target vehicle platform and the reference vehicle platform, a difference between a respective pose of one or more sensors that captured the sensor data relative to one or more portions of the target vehicle platform and a respective pose of one or more additional sensors that captured the additional sensor data relative to one or more portions of the reference vehicle platform, and a difference between a first pose of the one or more sensors in three-dimensional (3D) space and a second pose of the one or more additional sensors in 3D space. (Saleh [fig(s) 2] [sec(s) 1] “However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 3.1] “In the first stage of our framework, we train a CycleGAN model between unpaired synthetic BEV point cloud data and real BEV point cloud data. The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa.” [sec(s) 4.2] “the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI” [sec(s) 4.1] “For the task of the DA between synthetic and real BEV point cloud images, we relied on two datasets. The first dataset is the recently released Motion-Distorted LiDAR Simulation (MDLS) dataset introduced in [29]. This dataset represents the synthetic domain S of our CycleGAN-based DA approach discussed in Section 3.2. The MLDS dataset was generated from high fidelity simulated urban traffic environments from the CARLA simulator [7] using a simulated Velodyne HDL-64E sensor. … The dataset consists of two sequences of point cloud data from urban traffic environment involving between 60 to 90 moving vehicle, each one with an average duration of five minutes which results in total 6K point cloud scans. The dataset was annotated with the position of the vehicles in the scene. ... The second dataset we utilised for the real domain R of our CycleGAN-based DA approach is the BEV benchmark data from the KITTI dataset [9]. The BEV benchmark data consists of 7481 training images and point cloud scans and 7518 test images and point cloud scans. The point cloud data was captured using a real 3D LiDAR sensor the Velodyne HDL-64E sensor.”;) Regarding claim 8 The combination of Saleh, Yoon teaches claim 1. 
Saleh further teaches wherein the sensor data comprises data from a first type of sensor and the additional data comprises data from a second type of sensor, wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises mapping the data from the first type of sensor to the reference vehicle platform and the data from the second type of sensor, wherein one of the first type of sensor or the second type of sensor comprises one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor, and wherein a different one of the first type of sensor or the second type of sensor comprises a different one of the LIDAR sensor, the RADAR sensor, the camera sensor, the acoustic sensor, or the TOF sensor. (Saleh [fig(s) 2] [sec(s) 1] “However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 3.1] “In the first stage of our framework, we train a CycleGAN model between unpaired synthetic BEV point cloud data and real BEV point cloud data. The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa.” [sec(s) 4.2] “the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI” [sec(s) 4.1] “For the task of the DA between synthetic and real BEV point cloud images, we relied on two datasets. The first dataset is the recently released Motion-Distorted LiDAR Simulation (MDLS) dataset introduced in [29]. This dataset represents the synthetic domain S of our CycleGAN-based DA approach discussed in Section 3.2. The MLDS dataset was generated from high fidelity simulated urban traffic environments from the CARLA simulator [7] using a simulated Velodyne HDL-64E sensor. … The dataset consists of two sequences of point cloud data from urban traffic environment involving between 60 to 90 moving vehicle, each one with an average duration of five minutes which results in total 6K point cloud scans. The dataset was annotated with the position of the vehicles in the scene. ... The second dataset we utilised for the real domain R of our CycleGAN-based DA approach is the BEV benchmark data from the KITTI dataset [9]. The BEV benchmark data consists of 7481 training images and point cloud scans and 7518 test images and point cloud scans. The point cloud data was captured using a real 3D LiDAR sensor the Velodyne HDL-64E sensor.”; e.g., “real 3D LiDAR sensor the Velodyne HDL-64E sensor” and “simulated Velodyne HDL-64E sensor” read(s) on “first type of sensor” and “second type of sensor”.) Regarding claim 10 The combination of Saleh, Yoon teaches claim 1. 
Saleh further teaches generate simulation data that simulates a context of a first set of training data associated with the reference vehicle platform; (Saleh [fig(s) 2] [sec(s) 4.1] “The first dataset is the recently released Motion-Distorted LiDAR Simulation (MDLS) dataset introduced in [29]. This dataset represents the synthetic domain S of our CycleGAN-based DA approach discussed in Section 3.2. The MLDS dataset was generated from high fidelity simulated urban traffic environments from the CARLA simulator [7] using a simulated Velodyne HDL-64E sensor. … The dataset consists of two sequences of point cloud data from urban traffic environment involving between 60 to 90 moving vehicle, each one with an average duration of five minutes which results in total 6K point cloud scans. The dataset was annotated with the position of the vehicles in the scene.”;) based on the simulation data, modify a second set of training data associated with the reference vehicle platform to reflect the context of the first set of training data; and (Saleh [fig(s) 1] [sec(s) 2.1] “In CycleGAN, it is essentially comprised of two conditional GAN networks. The first network works on the transformation from the source domain (S) to the target domain (T), S → T, while the other one works on the transformation in the opposite direction, T → S.” [sec(s) 3.1] “In our formulation for the vehicle detection task from real BEV point cloud data, we are proposing a framework consisting of two stages. In the first stage of our framework, we train a CycleGAN model between unpaired synthetic BEV point cloud data and real BEV point cloud data. The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa. As a result, given any annotated synthetic BEV point cloud dataset with vehicles, the trained CycleGAN model will transform that dataset to an annotated real-like BEV point cloud data. Finally, using the transformed dataset, we could train another ConvNet-based model for the vehicle detection task in real BEV point cloud data.” [sec(s) 3.2] “In this work, we will be exploring the CycleGAN architecture for the task of DA between real BEV point cloud domain and synthetic BEV point cloud domain. One of the advantages of the CycleGAN architecture in the context of DA is it can learn transformation between source and target domains without any supervised one-to-one mapping between the two domains. … More formally, given our two domains S, R of the synthetic and the real BEV point cloud data domains. Then, the objective of our adopted CycleGAN-based DA approach (shown in Fig. 4) is to map between the distributions s ∼ Pd(s) and r ∼ Pd(r) from the synthetic and the real BEV point cloud domains respectively. The proposed CycleGAN-based DA approach achieve this mapping via the two generators, GS→R and GR→S and the two discriminators DS and DR. The generator GS→R will try to map the input source synthetic BEV point cloud image to some target real BEV point cloud image.”;) based on the first set of training data and the modified second set of training data, train one or more machine learning models to map the modified second set of training data associated with the target vehicle platform to the reference vehicle platform, wherein mapping the sensor data to the reference vehicle platform is done via the one or more machine learning models. (Saleh [fig(s) 1] [sec(s) 2.1] “In CycleGAN, it is essentially comprised of two conditional GAN networks. 
The first network works on the transformation from the source domain (S) to the target domain (T), S → T, while the other one works on the transformation in the opposite direction, T → S.” [sec(s) 3.1] “In our formulation for the vehicle detection task from real BEV point cloud data, we are proposing a framework consisting of two stages. In the first stage of our framework, we train a CycleGAN model between unpaired synthetic BEV point cloud data and real BEV point cloud data. The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa. As a result, given any annotated synthetic BEV point cloud dataset with vehicles, the trained CycleGAN model will transform that dataset to an annotated real-like BEV point cloud data. Finally, using the transformed dataset, we could train another ConvNet-based model for the vehicle detection task in real BEV point cloud data.” [sec(s) 3.2] “In this work, we will be exploring the CycleGAN architecture for the task of DA between real BEV point cloud domain and synthetic BEV point cloud domain. One of the advantages of the CycleGAN architecture in the context of DA is it can learn transformation between source and target domains without any supervised one-to-one mapping between the two domains. … More formally, given our two domains S, R of the synthetic and the real BEV point cloud data domains. Then, the objective of our adopted CycleGAN-based DA approach (shown in Fig. 4) is to map between the distributions s ∼ Pd(s) and r ∼ Pd(r) from the synthetic and the real BEV point cloud domains respectively. The proposed CycleGAN-based DA approach achieve this mapping via the two generators, GS→R and GR→S and the two discriminators DS and DR. The generator GS→R will try to map the input source synthetic BEV point cloud image to some target real BEV point cloud image.”;) Regarding claim 11 The claim is a method claim corresponding to the system claim 1, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim. Regarding claim 12 The claim is a method claim corresponding to the system claim 2, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim. Regarding claim 13 The claim is a method claim corresponding to the system claim 3, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim. Regarding claim 14 The claim is a method claim corresponding to the system claim 4, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim. Regarding claim 15 The claim is a method claim corresponding to the system claim 5, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim. Regarding claim 16 The claim is a method claim corresponding to a combination of the system claims 6 and 7, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the combination of the system claims. Regarding claim 18 The claim is a method claim corresponding to the system claim 8, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim. 
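For reference, the CycleGAN training objective behind the passages quoted above is not reproduced in the Office Action; in its standard published form (Zhu et al.), using the quoted notation of generators GS→R and GR→S, discriminators DS and DR, and distributions s ∼ Pd(s) and r ∼ Pd(r), it combines two adversarial terms with a λ-weighted cycle-consistency penalty:

```latex
% Standard CycleGAN objective (Zhu et al.), stated in the notation quoted from Saleh.
% The lambda-weighted cycle term follows the published form; it is not quoted above.
\mathcal{L}(G_{S\to R}, G_{R\to S}, D_S, D_R) =
  \mathcal{L}_{\mathrm{GAN}}(G_{S\to R}, D_R)
  + \mathcal{L}_{\mathrm{GAN}}(G_{R\to S}, D_S)
  + \lambda\,\mathcal{L}_{\mathrm{cyc}},
\qquad
\mathcal{L}_{\mathrm{cyc}} =
  \mathbb{E}_{s \sim P_d(s)}\!\left[\lVert G_{R\to S}(G_{S\to R}(s)) - s \rVert_1\right]
  + \mathbb{E}_{r \sim P_d(r)}\!\left[\lVert G_{S\to R}(G_{R\to S}(r)) - r \rVert_1\right].
```

The cycle-consistency term is what permits training on unpaired synthetic and real BEV images, consistent with the quoted statement that CycleGAN requires no supervised one-to-one mapping between the two domains.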
Regarding claim 19 The claim is a method claim corresponding to the system claim 10, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim. Regarding claim 20 The claim is a computer-readable medium claim corresponding to the system claim 1, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the system claim. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Saleh et al. (Domain Adaptation for Vehicle Detection from Bird’s Eye View LiDAR Point Cloud Data) in view of Yoon et al. (Mapless Online Detection of Dynamic Objects in 3D Lidar) in view of Ros et al. (The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes) Regarding claim 9 The combination of Saleh, Yoon teaches claim 1. Saleh further teaches wherein the sensor data comprises data from a first type of sensor and the additional data comprises [fused] data from a [multiple types of] sensor[s], wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises mapping the data from the first type of sensor to the reference vehicle platform and the [fused] data from the [multiple types of] sensor[s], wherein the [multiple types of] sensor[s] comprise at least one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor, and wherein a different one of the first type of sensor or the second type of sensor comprises a different one of the LIDAR sensor, the RADAR sensor, the camera sensor, the acoustic sensor, and the TOF sensor. (Saleh [fig(s) 2] [sec(s) 1] “However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 3.1] “In the first stage of our framework, we train a CycleGAN model between unpaired synthetic BEV point cloud data and real BEV point cloud data. The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa.” [sec(s) 4.2] “the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI” [sec(s) 4.1] “For the task of the DA between synthetic and real BEV point cloud images, we relied on two datasets. The first dataset is the recently released Motion-Distorted LiDAR Simulation (MDLS) dataset introduced in [29]. This dataset represents the synthetic domain S of our CycleGAN-based DA approach discussed in Section 3.2. The MLDS dataset was generated from high fidelity simulated urban traffic environments from the CARLA simulator [7] using a simulated Velodyne HDL-64E sensor. 
… The dataset consists of two sequences of point cloud data from urban traffic environment involving between 60 to 90 moving vehicle, each one with an average duration of five minutes which results in total 6K point cloud scans. The dataset was annotated with the position of the vehicles in the scene. ... The second dataset we utilised for the real domain R of our CycleGAN-based DA approach is the BEV benchmark data from the KITTI dataset [9]. The BEV benchmark data consists of 7481 training images and point cloud scans and 7518 test images and point cloud scans. The point cloud data was captured using a real 3D LiDAR sensor the Velodyne HDL-64E sensor.”;) However, the combination of Saleh, Yoon does not appear to explicitly teach: the additional data comprises [fused] data from a [multiple types of] sensor[s], … mapping the data from the first type of sensor to the reference vehicle platform and the [fused] data from the [multiple types of] sensor[s], wherein the [multiple types of] sensor[s] comprise at least one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor. Ros teaches the additional data comprises fused data from multiple types of sensors, … mapping the data from the first type of sensor to the reference vehicle platform and the fused data from the multiple types of sensors, wherein the multiple types of sensors comprise at least one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor. (Ros [fig(s) 4] “Virtual car setup used for acquisition. Two multicameras with four monocular cameras are used. The baseline between the cameras is 0.8m and the FOV of the cameras is 100 deg.” [fig(s) 5-7] [sec(s) 3] “SYNTHIA-Seqs simulates four video sequences of approximately 50,000 frames each one up to a total of 200,000 frames, acquired from a virtual car across different seasons (one sequence per season). The virtual acquisition platform consists of two multi-cameras separated by a baseline B = 0.8m in the x-axis. Each of these multi-cameras consists of four monocular cameras with a common center and orientations varying every 90 degrees, as depicted in Fig. 4. Since all cameras have a field of view of 100 degrees the visual overlapping serves to create an omnidirectional view on demand, as shown in Fig. 5. Each of these cameras also has a virtual depth sensor associated, which works in a range from 1.5 to 50 meters and is perfectly aligned with the camera center, resolution and field of view (Fig. 5, bottom). The virtual vehicle moves through the city interacting with dynamic objects such as pedestrians and cyclists that present dynamic behaviour. This interaction produces changes in the trajectory and speed of the vehicle and leads to variations of each of the individual video sequences. This collection is oriented to provide data to exploit spatio-temporal constraints of the objects.”;) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Saleh, Yoon with the multiple types of sensors of Ros. One of ordinary skill in the art would have been motivated to combine in order to show that the use of synthetic data helps to improve semantic segmentation results on real imagery. 
(Ros [sec(s) 4.2] “The aim of this work is to show that the use of synthetic data helps to improve semantic segmentation results on real imagery. There exist several ways to exploit synthetic data for this purpose. A trivial option would be to use the synthetic data alone for training a model and then apply it on real images. However, due to domain shift [37, 42] this approach does not usually perform well. An alternative is to train a model on the vast amount of synthetic images and afterwards fine-tuning it on a reduced set of real images. This leads to better results, since the statistics of the real domain are considered during the second stage of training [27].”) Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Saleh et al. (Domain Adaptation for Vehicle Detection from Bird’s Eye View LiDAR Point Cloud Data) in view of Yoon et al. (Mapless Online Detection of Dynamic Objects in 3D Lidar) in view of Geiger et al. (Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite) Regarding claim 17 The combination of Saleh, Yoon teaches claim 1. Saleh further teaches wherein the sensor data comprises [fused] data from [multiple types of] sensor[s] and the additional data comprises data from a first type of sensor, wherein mapping the sensor data associated with the target vehicle platform to the reference vehicle platform comprises mapping the [fused] data from the [multiple types of] sensor[s] to the reference vehicle platform and the data from the first type of sensor, wherein the [multiple types of] sensor[s] comprise at least one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor, and wherein a different one of the first type of sensor or the second type of sensor comprises a different one of the LIDAR sensor, the RADAR sensor, the camera sensor, the acoustic sensor, and the TOF sensor. (Saleh [fig(s) 2] [sec(s) 1] “However, the generalisation to real-point cloud data was rather limited due to the perfectness of the synthetic point cloud data (shown in Fig. 1, right) which is missing the artefacts usually exist in point cloud data from real 3D LiDAR sensors (shown in Fig. 1, left). These artefacts are such as the variability of the LiDAR beams intensities or the motion distortion as a result of the motion of the 3D LiDAR. Domain adaptation (DA) is one of the machine learning (ML) techniques that have been recently explored to bridge the aforementioned gaps between synthetic and real data domains [27].” [sec(s) 3.1] “In the first stage of our framework, we train a CycleGAN model between unpaired synthetic BEV point cloud data and real BEV point cloud data. The trained model, in returns, learns a transformation from synthetic BEV point cloud data to real BEV point cloud data and vice versa.” [sec(s) 4.2] “the generated BEV point cloud from our CycleGAN model is mimicking and trying to be consistent with the same structure exist in the real BEV point cloud image from KITTI” [sec(s) 4.1] “For the task of the DA between synthetic and real BEV point cloud images, we relied on two datasets. The first dataset is the recently released Motion-Distorted LiDAR Simulation (MDLS) dataset introduced in [29]. This dataset represents the synthetic domain S of our CycleGAN-based DA approach discussed in Section 3.2. 
The MLDS dataset was generated from high fidelity simulated urban traffic environments from the CARLA simulator [7] using a simulated Velodyne HDL-64E sensor. … The dataset consists of two sequences of point cloud data from urban traffic environment involving between 60 to 90 moving vehicle, each one with an average duration of five minutes which results in total 6K point cloud scans. The dataset was annotated with the position of the vehicles in the scene. ... The second dataset we utilised for the real domain R of our CycleGAN-based DA approach is the BEV benchmark data from the KITTI dataset [9]. The BEV benchmark data consists of 7481 training images and point cloud scans and 7518 test images and point cloud scans. The point cloud data was captured using a real 3D LiDAR sensor the Velodyne HDL-64E sensor.”;) However, the combination of Saleh, Yoon does not appear to explicitly teach: wherein the sensor data comprises [fused] data from [multiple types of] sensor[s] …, … mapping the [fused] data from the [multiple types of] sensor[s] to the reference vehicle platform and the data from the first type of sensor, wherein the [multiple types of] sensor[s] comprise at least one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor, and wherein a different one of the first type of sensor or the second type of sensor comprises a different one of the LIDAR sensor, the RADAR sensor, the camera sensor, the acoustic sensor, and the TOF sensor. Geiger teaches wherein the sensor data comprises fused data from multiple types of sensors …, … mapping the fused data from the multiple types of sensors to the reference vehicle platform and the data from the first type of sensor, wherein the multiple types of sensor[s] comprise at least one of a light detection and ranging (LIDAR) sensor, a radio detection and ranging (RADAR) sensor, a camera sensor, an acoustic sensor, or a time-of-flight (TOF) sensor, and wherein a different one of the first type of sensor or the second type of sensor comprises a different one of the LIDAR sensor, the RADAR sensor, the camera sensor, the acoustic sensor, and the TOF sensor. (Geiger [sec(s) 1] “In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for stereo, optical flow, visual odometry / SLAM and 3D object detection. Our benchmarks are captured by driving around a mid-size city, in rural areas and on highways. Our recording platform is equipped with two high resolution stereo camera systems (grayscale and color), a Velodyne HDL-64E laser scanner that produces more than one million 3D points per second and a state-of-the-art OXTS RT 3003 localization system which combines GPS, GLONASS, an IMU and RTK correction signals. The cameras, laser scanner and localization system are calibrated and synchronized, providing us with accurate ground truth. 
Table 1 summarizes our benchmarks and provides a comparison to existing datasets.” [sec(s) 2.1] “We equipped a standard station wagon with two color and two grayscale PointGrey Flea2 video cameras (10 Hz, resolution: 1392×512 pixels, opening: 90°×35°), a Velodyne HDL-64E 3D laser scanner (10 Hz, 64 laser beams, range: 100 m), a GPS/IMU localization unit with RTK correction signals (open sky localization errors < 5 cm) and a powerful computer running a real-time database [22].”;) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Saleh, Yoon with the multiple types of sensors of Geiger. One of ordinary skill in the art would have been motivated to combine in order to provide novel challenging benchmarks for stereo, optical flow, visual odometry / SLAM and 3D object detection, by driving around a mid-size city, in rural areas and on highways, so that a representative set of state-of-the-art systems using different benchmarks and metrics can be evaluated. (Geiger [sec(s) 1] “In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for stereo, optical flow, visual odometry / SLAM and 3D object detection. Our benchmarks are captured by driving around a mid-size city, in rural areas and on highways. … In our experiments, we evaluate a representative set of state-of-the-art systems using our benchmarks and novel metrics. Perhaps not surprisingly, many algorithms that do well on established datasets such as Middlebury [41, 2] struggle on our benchmark. We conjecture that this might be due to their assumptions which are violated in our scenarios, as well as overfitting to a small set of training (test) images”) Prior Art The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dosovitskiy et al. (CARLA: An Open Urban Driving Simulator) teaches the Motion-Distorted LiDAR Simulation (MDLS) dataset. Cabon et al. (VIRTUAL KITTI 2) teaches a virtual clone of the real KITTI. Menze et al. (Object Scene Flow for Autonomous Vehicles) teaches a dataset for Autonomous Vehicles. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEHWAN KIM whose telephone number is (571)270-7409. The examiner can normally be reached Mon - Fri 9:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SEHWAN KIM/Examiner, Art Unit 2129 2/21/2026
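As a concrete illustration of the box-filter outlier scoring described in the Yoon passages quoted in the claim 4 and claim 5 mappings above (the scan arranged as a binary image with one laser per row and consecutive measurements as columns, dynamic labels valued 1 and static labels 0, a sliding box filter applied with pixelwise XNOR, and the summed score compared to a threshold), the following is a minimal sketch. The filter pattern and threshold here are assumptions for illustration; the quoted text states only that the filter targets sparse dynamic mislabels and that the threshold depends on lidar resolution.

```python
import numpy as np

def xnor_box_scores(label_img: np.ndarray, box: np.ndarray) -> np.ndarray:
    """Slide `box` over a binary label image (1 = dynamic, 0 = static) and,
    at each position, count the pixels where the patch and the filter agree
    (a pixelwise XNOR, summed into a numerical score)."""
    H, W = label_img.shape
    h, w = box.shape
    scores = np.zeros((H - h + 1, W - w + 1), dtype=int)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = label_img[i:i + h, j:j + w]
            scores[i, j] = int(np.sum(patch == box))  # XNOR: 1 where equal
    return scores

# Hypothetical example: one isolated dynamic label in a static neighborhood.
labels = np.zeros((8, 16), dtype=int)
labels[4, 7] = 1                      # a lone dynamic point, likely a mislabel
box = np.zeros((3, 3), dtype=int)
box[1, 1] = 1                         # assumed isolated-point filter pattern
score_threshold = 8                   # assumed; Yoon ties this to resolution
outliers = xnor_box_scores(labels, box) > score_threshold
```

With this assumed center-only pattern, a perfect match (score 9) occurs exactly where a dynamic label has no dynamic neighbors, so only isolated mislabels exceed the threshold, matching the quoted goal of filtering sparse traces of mislabelled dynamic points.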

Prosecution Timeline

Mar 14, 2023
Application Filed
Feb 21, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602595
SYSTEM AND METHOD OF USING A KNOWLEDGE REPRESENTATION FOR FEATURES IN A MACHINE LEARNING CLASSIFIER
2y 5m to grant Granted Apr 14, 2026
Patent 12602580
Dataset Dependent Low Rank Decomposition Of Neural Networks
2y 5m to grant Granted Apr 14, 2026
Patent 12602581
Systems and Methods for Out-of-Distribution Detection
2y 5m to grant Granted Apr 14, 2026
Patent 12602606
APPARATUSES, COMPUTER-IMPLEMENTED METHODS, AND COMPUTER PROGRAM PRODUCTS FOR IMPROVED GLOBAL QUBIT POSITIONING IN A QUANTUM COMPUTING ENVIRONMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12541722
MACHINE LEARNING TECHNIQUES FOR VALIDATING AND MUTATING OUTPUTS FROM PREDICTIVE SYSTEMS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
99%
With Interview (+65.6%)
4y 1m
Median Time to Grant
Low
PTA Risk
Based on 144 resolved cases by this examiner. Grant probability derived from career allow rate.
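
To make the projection arithmetic concrete: the 60% figure is the career allow rate (86 granted of 144 resolved), and the interview figures imply splitting those resolved cases by whether an interview was held. A minimal sketch follows; the per-group counts are assumptions for illustration, since the page reports only the aggregate figures, and no single split reproduces all three rounded numbers exactly.

```python
# Aggregate figures shown above: 86 of 144 resolved cases granted (~60%),
# about 99% with an interview, and a +65.6 pt interview lift. The group
# counts below are assumptions chosen to land close to those figures.
resolved, granted = 144, 86
with_iv_resolved, with_iv_granted = 58, 57         # assumed split
without_iv_resolved = resolved - with_iv_resolved  # 86
without_iv_granted = granted - with_iv_granted     # 29

career_rate = granted / resolved                         # 0.597 -> "60%"
with_rate = with_iv_granted / with_iv_resolved           # 0.983 -> near "99%"
without_rate = without_iv_granted / without_iv_resolved  # 0.337
lift = with_rate - without_rate                          # 0.646 -> near "+65.6%"
print(f"career {career_rate:.0%}, with interview {with_rate:.0%}, lift +{lift:.1%}")
```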
