DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/04/2025 has been entered.
Claims 1-2, 4-5, 7-8, 11-12, 14-15, 17-18 and 21-28 are currently pending and examined below. Claims 1-2, 5, 7-8, 11, 15 and 17-18 have been amended. Claims 25-28 have been added.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/25/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Response to Amendment
Applicant's arguments, see pages 7-13, filed 11/04/2025, with respect to the rejection(s) of claim(s) 1-2, 4-9, 11-12, 14-19 and 21-24 under 35 U.S.C. 103 have been fully considered but they are not persuasive.
Farabet teaches a GPU server 108 that is located remotely from the vehicle and uses the logged sensor data of a run to construct a simulated environment and virtual sensor data of the run using DNNs and an autonomous driving stack. Virtual sensors/objects are validated for accuracy or other criteria. The Examiner’s understanding of a KPI encompasses a metric computing the accuracy of the virtual sensor data output by the DNN against the logged sensor data ([0046] “Key performance indicators (KPIs) and/or metrics may be computed for one or more of the current DNNs.. one condition dimension may include properties of the whole image or frame, such as, but not limited to,..sensor (e.g., camera) properties such as position and/or lens type, and/or a combination thereof. The conditions or a combination of the conditions which the current DNNs are not considered to perform sufficiently well on (e.g., have an accuracy below a desired or required level) may be used to direct mining and labeling of data (e.g., additional data) that may increase the accuracy of the DNNs with reference to the conditions or combination of conditions”). In the example given in [0046], when the DNN does not perform well on the virtual sensor data, this implies that the virtual sensor data is compared to the ground truth, i.e., the logged sensor data, and that the comparison fails to meet a threshold on a metric, so the KPI of that particular DNN is determined to be low. Thus, Applicant’s argument is not persuasive, and Farabet teaches most of the claimed features. Farabet is not specific about the initial detection of an object, but Zapolsky teaches this feature.
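For illustration only, the following minimal Python sketch models the kind of KPI comparison discussed above, in which simulated (virtual) sensor output is compared against logged ground-truth data and the DNN is flagged when accuracy falls below a required level; the function names, metric, and thresholds are hypothetical and are drawn from neither Farabet nor the claims.

```python
import numpy as np

def kpi_accuracy(virtual: np.ndarray, logged: np.ndarray) -> float:
    """Toy KPI: fraction of points where the virtual (simulated) sensor
    output agrees with the logged ground-truth data within 5%."""
    rel_error = np.abs(virtual - logged) / (np.abs(logged) + 1e-9)
    return float(np.mean(rel_error < 0.05))

def performs_sufficiently_well(virtual: np.ndarray, logged: np.ndarray,
                               required_accuracy: float = 0.95) -> bool:
    """A DNN would be flagged for further data mining/labeling when its
    KPI falls below the required accuracy level (cf. Farabet [0046])."""
    return kpi_accuracy(virtual, logged) >= required_accuracy
```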
Specifically, Zapolsky takes the initially detected object, i.e., the previously perceived information or logged sensor data, runs a simulation on machine perception, and then uses the subsequently perceived information to improve predictions. Zapolsky relays this in [0006]-[0007]: in [0006], “when the system initially detects the object and generates the simulation to include the object and predictions of a subsequent state of the object, the system can then further extrapolate the prediction to maintain awareness of the object even when the object is not necessarily perceived in a subsequent acquisition of information”; and in [0007], “the system compares the predicted subsequent state with a presently perceived state from the presently captured information, the comparison can provide for updating the present representation in the simulation to correspond with the actual state of the object and/or improving/updating the simulation model to reflect actual perceived dynamic behaviors of the object”.
Farabet and Zapolsky are considered to be analogous to the claimed invention because they are in the same field of simulation, and more specifically, both concern simulation of machine perception. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation to further incorporate Zapolsky’s simulation for the advantage of simulating a logged object as soon as it is initially detected and simulating at time steps faster than real time, which provides for anticipating motion and interactions with the object within the surrounding environment (Zapolsky’s [0005]).
Both references concern simulation of machine perception and validation of that simulation. The Examiner does not find that real-time versus non-real-time operation, whether in the claims or in Farabet, is a restricting factor or one that would teach away from the invention, particularly when determining whether the references are analogous art or whether their combination would be obvious with respect to how a run is selected for simulation.
Therefore, the Examiner does not agree with Applicant’s arguments and maintains the rejections under 35 U.S.C. 103.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 7-8, 11-12, 14, 17-18, 23-25 and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over Farabet et al. (US 20190303759 A1; hereinafter Farabet) in view of Zapolsky et al. (US 20200042656 A1; hereinafter Zapolsky).
Regarding claim 1, Farabet discloses:
A method comprising:
receiving, by one or more processors (Fig. 1: GPU servers 108) of one or more server computing devices (Fig. 1: training sub-system 106), logged sensor data collected by a vehicle using a perception system ([0028] “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments”, [0030] “The sensor data collected by the sensors of the vehicle(s) 102..may be used by a training sub-system 106”), the one or more server computing devices being remote from the vehicle (see Fig. 1);
selecting, by the one or more processors of the one or more server computing devices, a run for the vehicle from the logged sensor data based on a point in time at which one or more objects of a particular type are detected during the selected run ([0138] “The long-range camera(s) 1198 may also be used for object detection and classification, as well as basic object tracking”, [0034] “The process 118 may include data ingestion of new driving data (e.g., sensor data) captured and/or generated by one or more vehicles 102 in real-world environments..The process 118 may include a training loop, whereby new data is generated by the vehicle(s) 102, used to train, test, verify, and/or validate one or more perception DNNs”), wherein the logged sensor data is collected for an environment along the selected run, the environment including the one or more objects in an area encompassing the selected run ([0028] “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments”);
constructing, by the one or more processors of the one or more server computing devices, environment data using the logged sensor data ([0030] “The sensor data collected by the sensors of the vehicle(s) 102..may be used by a training sub-system 106”) and software for autonomous driving ([0031] “The training sub-system 106 may train and/or test any number of machine learning models, including deep neural networks (DNNs), such as neural networks for performing operations associated with one or more layers of the autonomous driving software stack”);
running, by the one or more processors of the one or more server computing devices, a simulation of the selected run using the constructed environment data in order to generate simulated sensor data collected by a simulated sensor of a simulated perception system on a simulated vehicle moving through the constructed environment ([0060]-[0061] “The simulated environment may be generated using rasterization, ray-tracing, using DNNs such as generative adversarial networks (GANs), another rendering technique, and/or a combination thereof.. the simulation system 400A may use real-time ray-tracing..The ray-tracing may be used to simulate LIDAR sensor for accurate generation of LIDAR data”);
comparing, by the one or more processors of the one or more server computing devices, the logged sensor data for the selected run to the simulated sensor data ([0123] “the outputs may be tested using one or more KPI's”, [0107] “KPI evaluation component may evaluate the performance of the virtual object(s)”, [0052] “virtual sensor of each virtual object”); and
evaluating, by the one or more processors of the one or more server computing devices, performance of the simulated sensor based on the comparison ([0123] “to determine the accuracy and effectiveness of the trained DNNs in any of a number of scenarios and environments”).
Farabet does not specifically disclose:
selecting, by the one or more processors of the one or more server computing devices, a run for the vehicle from the logged sensor data based on a point in time at which one or more objects of a particular type are initially detected during the selected run.
However, Zapolsky discloses:
selecting, by the one or more processors of the one or more server computing devices, a run for the vehicle from the logged sensor data based on a point in time at which one or more objects of a particular type are initially detected during the selected run ([0006] “when the system initially detects the object and generates the simulation to include the object and predictions of a subsequent state of the object, the system can then further extrapolate the prediction to maintain awareness of the object even when the object is not necessarily perceived in a subsequent acquisition of information”).
Farabet and Zapolsky are considered to be analogous to the claimed invention because they are in the same field of simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation to further incorporate Zapolsky’s simulation for the advantage of simulating a logged object as soon as it is initially detected and simulating at time steps faster than real time, which provides for anticipating motion and interactions with the object within the surrounding environment (Zapolsky’s [0005]).
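For illustration only, the claimed sequence mapped above (receive logged sensor data, select a run based on the point in time at which an object is initially detected, construct environment data, run the simulation, compare, and evaluate) may be summarized in the following self-contained Python sketch; all names, placeholder implementations, and the toy metric are hypothetical and appear in neither reference.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Run:
    """A logged run: sensor frames plus the time at which an object of
    the particular type was first detected (all fields hypothetical)."""
    sensor_log: np.ndarray
    first_detection_time: float

def construct_environment(sensor_log: np.ndarray) -> np.ndarray:
    # Placeholder for environment construction from logged sensor data
    # (e.g., meshes/world state per Farabet [0030]-[0031]).
    return sensor_log.copy()

def simulate(environment: np.ndarray) -> np.ndarray:
    # Placeholder for re-rendering the environment through a simulated
    # sensor (e.g., ray-tracing per Farabet [0060]-[0061]).
    return environment + np.random.normal(0.0, 0.01, environment.shape)

def evaluate_simulated_sensor(runs: list) -> float:
    # Select a run based on the point in time at which the object of the
    # particular type is initially detected (cf. Zapolsky [0006]).
    run = min(runs, key=lambda r: r.first_detection_time)
    simulated = simulate(construct_environment(run.sensor_log))
    # Compare simulated sensor data to the logged sensor data and return
    # a toy agreement KPI for evaluating the simulated sensor.
    return float(np.mean(np.abs(simulated - run.sensor_log) < 0.05))
```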
Regarding claim 2, Farabet does not specifically disclose:
further comprising selecting, by the one or more processors of the one or more server computing devices, a time frame for the selected run.
However, Zapolsky discloses:
further comprising selecting, by the one or more processors of the one or more server computing devices, a time frame for the selected run ([0005] “real-time”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation to further incorporate Zapolsky’s simulation for the advantage of simulating a logged object as soon as it is initially detected and simulating at time steps faster than real time, which provides for anticipating motion and interactions with the object within the surrounding environment (Zapolsky’s [0005]).
Regarding claim 4, Farabet discloses:
wherein the constructed environment data includes regenerated mesh points ([0094] meshes may be employed..in the simulation).
Regarding claim 7, Farabet discloses:
wherein the running of the simulation includes retracing rays transmitted from the simulated sensor and recomputing intensities of the rays off points in the constructed environment data in order to generate the simulated sensor data according to configuration characteristics of the simulated sensor ([0097] “cast virtual rays toward the tracked objects. When a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data”, [0060]-[0061] “The simulated environment may be generated using rasterization, ray-tracing, using DNNs such as generative adversarial networks (GANs), another rendering technique, and/or a combination thereof.. the simulation system 400A may use real-time ray-tracing..The ray-tracing may be used to simulate LIDAR sensor for accurate generation of LIDAR data”).
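For illustration only, the following minimal Python sketch shows ray casting of the kind quoted from Farabet [0097], in which virtual rays are cast toward objects, per-ray return intensities are recomputed, and an object is reported when a significant number of rays strike it; the sphere geometry, inverse-square intensity model, and all names are hypothetical illustrations, not the disclosure of either reference.

```python
import numpy as np

def cast_lidar_rays(origin, directions, spheres, hit_threshold=10):
    """Cast virtual rays from a simulated LIDAR toward sphere-shaped
    objects and recompute per-ray return intensities; an object is
    reported when a significant number of rays strike it."""
    reported = []
    for center, radius in spheres:
        hits, intensities = 0, []
        for d in directions:
            d = d / np.linalg.norm(d)
            # Ray-sphere intersection test: solve |origin + t*d - center| = radius.
            oc = origin - center
            b = np.dot(oc, d)
            disc = b * b - (np.dot(oc, oc) - radius ** 2)
            if disc >= 0:
                t = -b - np.sqrt(disc)  # distance to first intersection
                if t > 0:
                    hits += 1
                    intensities.append(1.0 / t ** 2)  # toy inverse-square falloff
        if hits >= hit_threshold:
            reported.append({"center": center, "hits": hits,
                             "mean_intensity": float(np.mean(intensities))})
    return reported
```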
Regarding claim 8, Farabet discloses:
wherein the running of the simulation includes modeling the simulated sensor based on configuration characteristics or operational settings of the simulated perception system (emulate at least cameras, LIDAR sensors, and/or RADAR sensors; [0097]).
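For illustration only, a minimal hypothetical sketch of modeling a simulated sensor from configuration characteristics or operational settings follows; the fields and default values are illustrative assumptions and are not drawn from Farabet.

```python
from dataclasses import dataclass

@dataclass
class SimulatedSensorConfig:
    """Hypothetical configuration characteristics / operational settings
    for a simulated sensor (camera, LIDAR, or RADAR emulation)."""
    sensor_type: str = "lidar"          # e.g., "camera", "lidar", "radar"
    position: tuple = (0.0, 0.0, 1.8)   # mounting position on the vehicle (m)
    field_of_view_deg: float = 360.0
    max_range_m: float = 120.0
    scan_rate_hz: float = 10.0

def build_simulated_sensor(config: SimulatedSensorConfig) -> dict:
    # The simulation would instantiate the sensor model from these
    # settings before generating simulated sensor data.
    return {"type": config.sensor_type, "pose": config.position,
            "fov": config.field_of_view_deg, "range": config.max_range_m,
            "rate": config.scan_rate_hz}
```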
Regarding claim 11, Farabet discloses:
A non-transitory, tangible computer-readable medium ([0027] memory) on which computer-readable instructions of a program are stored, the instructions, when executed by one or more processors (Fig. 1: GPU servers 108) of one or more server computing devices (Fig. 1: training sub-system 106), cause the one or more processors to perform a method ([0027] various functions may be carried out by a processor executing instructions stored in memory), the method comprising:
receiving logged sensor data collected by a vehicle using a perception system ([0028] “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments”, [0030] “The sensor data collected by the sensors of the vehicle(s) 102..may be used by a training sub-system 106”);
selecting a run for the vehicle from the logged sensor data based on a point in time when one or more objects of a particular type are detected during the selected run, wherein the logged sensor data is collected for an environment along the selected run ([0138] “The long-range camera(s) 1198 may also be used for object detection and classification, as well as basic object tracking”, [0034] “The process 118 may include data ingestion of new driving data (e.g., sensor data) captured and/or generated by one or more vehicles 102 in real-world environments..The process 118 may include a training loop, whereby new data is generated by the vehicle(s) 102, used to train, test, verify, and/or validate one or more perception DNNs”), the environment including the one or more objects in an area encompassing the selected run ([0028] “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments”);
constructing environment data for the selected run using the logged sensor data ([0030] “The sensor data collected by the sensors of the vehicle(s) 102..may be used by a training sub-system 106”) and software for autonomous driving ([0031] “The training sub-system 106 may train and/or test any number of machine learning models, including deep neural networks (DNNs), such as neural networks for performing operations associated with one or more layers of the autonomous driving software stack”);
running, using the software for autonomous driving, a simulation of the selected run using the constructed environment data in order to generate simulated sensor data collected by a simulated sensor of a simulated perception system on a simulated vehicle moving through the constructed environment ([0060]-[0061] “The simulated environment may be generated using rasterization, ray-tracing, using DNNs such as generative adversarial networks (GANs), another rendering technique, and/or a combination thereof.. the simulation system 400A may use real-time ray-tracing..The ray-tracing may be used to simulate LIDAR sensor for accurate generation of LIDAR data”);
comparing the logged sensor data for the selected run to the simulated sensor data for the selected run ([0123] “the outputs may be tested using one or more KPI's”, [0107] “KPI evaluation component may evaluate the performance of the virtual object(s)”, [0052] “virtual sensor of each virtual object”); and
evaluating the simulated sensor based on the comparison ([0123] “to determine the accuracy and effectiveness of the trained DNNs in any of a number of scenarios and environments”).
Farabet does not specifically disclose:
selecting a run for the vehicle from the logged sensor data based on a point in time when one or more objects of a particular type are initially detected during the selected run.
However, Zapolsky discloses:
selecting a run for the vehicle from the logged sensor data based on a point in time when one or more objects of a particular type are initially detected during the selected run ([0006] “when the system initially detects the object and generates the simulation to include the object and predictions of a subsequent state of the object, the system can then further extrapolate the prediction to maintain awareness of the object even when the object is not necessarily perceived in a subsequent acquisition of information”).
Farabet and Zapolsky are considered to be analogous to the claimed invention because they are in the same field of simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation to further incorporate Zapolsky’s simulation for the advantage of simulating a logged object as soon as it is initially detected and simulating at time steps faster than real time, which provides for anticipating motion and interactions with the object within the surrounding environment (Zapolsky’s [0005]).
Regarding claim 12, Farabet does not specifically disclose:
wherein the method further comprises selecting a time frame for the selected run.
However, Zapolsky discloses:
wherein the method further comprises selecting a time frame for the selected run ([0005] “real-time”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation to further incorporate Zapolsky’s simulation for the advantage of simulating a logged object as soon as it is initially detected and simulating at time steps faster than real time, which provides for anticipating motion and interactions with the object within the surrounding environment (Zapolsky’s [0005]).
Regarding claim 14, Farabet discloses:
wherein the constructed environment data includes regenerated mesh points ([0094] meshes may be employed..in the simulation).
Regarding claim 17, Farabet discloses:
wherein the running of the simulation includes retracing rays transmitted from the simulated sensor and recomputing intensities of the rays off points in the constructed environment data ([0097] “cast virtual rays toward the tracked objects. When a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data”, [0060]-[0061] “The simulated environment may be generated using rasterization, ray-tracing, using DNNs such as generative adversarial networks (GANs), another rendering technique, and/or a combination thereof.. the simulation system 400A may use real-time ray-tracing..The ray-tracing may be used to simulate LIDAR sensor for accurate generation of LIDAR data”).
Regarding claim 18, Farabet discloses:
wherein the running of the simulation includes modeling the simulated sensor based on configuration characteristics or operational settings of the simulated perception system (emulate at least cameras, LIDAR sensors, and/or RADAR sensors; [0097]).
Regarding claim 23, Farabet discloses:
wherein the particular type of the one or more objects is one of a pedestrian ([0031] pedestrians), a cyclist ([0138] bicycles), a motorcycle, foliage ([0061] tree), or a sidewalk.
Regarding claim 24, Farabet discloses:
wherein the particular type of the one or more objects is one of a pedestrian ([0031] pedestrians), a cyclist ([0138] bicycles), a motorcycle, foliage ([0061] tree), or a sidewalk.
Regarding claim 25, Farabet discloses:
A system (Fig. 1: system 100) comprising one or more server computing devices (Fig. 1: training sub-system 106), each having one or more processors (Fig. 1: GPU servers 108), the one or more server computing devices being configured to:
receive logged sensor data collected by a vehicle using a perception system ([0028] “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments”, [0030] “The sensor data collected by the sensors of the vehicle(s) 102..may be used by a training sub-system 106”), the one or more server computing devices being remote from the vehicle (see Fig. 1);
select a run for the vehicle from the logged sensor data based on a point in time at which one or more objects of a particular type are detected during the selected run, wherein the logged sensor data is collected for an environment along the selected run ([0138] “The long-range camera(s) 1198 may also be used for object detection and classification, as well as basic object tracking”, [0034] “The process 118 may include data ingestion of new driving data (e.g., sensor data) captured and/or generated by one or more vehicles 102 in real-world environments..The process 118 may include a training loop, whereby new data is generated by the vehicle(s) 102, used to train, test, verify, and/or validate one or more perception DNNs”), the environment including the one or more objects in an area encompassing the selected run ([0028] “One or more vehicles 102 may collect sensor data from one or more sensors of the vehicle(s) 102 in real-world (e.g., physical) environments”);
construct environment data using the logged sensor data ([0030] “The sensor data collected by the sensors of the vehicle(s) 102..may be used by a training sub-system 106”) and software for autonomous driving ([0031] “The training sub-system 106 may train and/or test any number of machine learning models, including deep neural networks (DNNs), such as neural networks for performing operations associated with one or more layers of the autonomous driving software stack”);
run a simulation of the selected run using the constructed environment data in order to generate simulated sensor data collected by a simulated sensor of a simulated perception system on a simulated vehicle moving through the constructed environment ([0060]-[0061] “The simulated environment may be generated using rasterization, ray-tracing, using DNNs such as generative adversarial networks (GANs), another rendering technique, and/or a combination thereof.. the simulation system 400A may use real-time ray-tracing..The ray-tracing may be used to simulate LIDAR sensor for accurate generation of LIDAR data”);
compare the logged sensor data for the selected run to the simulated sensor data ([0123] “the outputs may be tested using one or more KPI's”, [0107] “KPI evaluation component may evaluate the performance of the virtual object(s)”, [0052] “virtual sensor of each virtual object”); and
evaluate performance of the simulated sensor based on the comparison ([0123] “to determine the accuracy and effectiveness of the trained DNNs in any of a number of scenarios and environments”).
Farabet does not specifically disclose:
select a run for the vehicle from the logged sensor data based on a point in time at which one or more objects of a particular type are initially detected during the selected run.
However, Zapolsky discloses:
select a run for the vehicle from the logged sensor data based on a point in time at which one or more objects of a particular type are initially detected during the selected run ([0006] “when the system initially detects the object and generates the simulation to include the object and predictions of a subsequent state of the object, the system can then further extrapolate the prediction to maintain awareness of the object even when the object is not necessarily perceived in a subsequent acquisition of information”).
Farabet and Zapolsky are considered to be analogous to the claimed invention because they are in the same field of simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation to further incorporate Zapolsky’s simulation for the advantage of simulating a logged object as soon as it is initially detected and simulating at time steps faster than real time, which provides for anticipating motion and interactions with the object within the surrounding environment (Zapolsky’s [0005]).
Regarding claim 27, Farabet discloses:
wherein the running of the simulation includes retracing rays transmitted from the simulated sensor and recomputing intensities of the rays off points in the constructed environment data in order to generate the simulated sensor data according to configuration characteristics of the simulated sensor ([0097] “cast virtual rays toward the tracked objects. When a significant number of rays strike a tracked object, that object may be added to the report of the LIDAR data”, [0060]-[0061] “The simulated environment may be generated using rasterization, ray-tracing, using DNNs such as generative adversarial networks (GANs), another rendering technique, and/or a combination thereof.. the simulation system 400A may use real-time ray-tracing..The ray-tracing may be used to simulate LIDAR sensor for accurate generation of LIDAR data”).
Regarding claim 28, Farabet discloses:
wherein the running of the simulation includes modeling the simulated sensor based on configuration characteristics or operational settings of the simulated perception system (emulate at least cameras, LIDAR sensors, and/or RADAR sensors; [0097]).
Claims 5, 15, 21-22 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Farabet in view of Zapolsky, and further in view of Liang et al. (US 20220036579 A1; hereinafter Liang).
Regarding claim 5, Farabet and Zapolsky do not specifically disclose:
further comprising, extracting details associated with at least one of the point in time at which the one or more objects are detected or a location of the one or more objects from a scaled mesh representing each of the one or more objects in the simulated environment, wherein the extracted details are used to evaluate the simulated sensor.
However, Liang discloses:
further comprising, extracting details associated with at least one of the point in time at which the one or more objects are detected or a location of the one or more objects from a scaled mesh representing each of the one or more objects in the simulated environment ([0010], [0013], [0090] generate a three-dimensional mesh representation of the dynamic object within a simulated environment, simulation data can include a plurality of sequences of object model parameters, each sequence of object model parameters can be indicative of a trajectory of a respective dynamic object over time), wherein the extracted details are used to evaluate the simulated sensor ([0035] “Each realistic three-dimensional pedestrian sequence can be generated based on three-dimensional data (e.g., sparse point cloud(s), etc.) and two-dimensional data (e.g., image frame(s), etc.) captured by one or more sensor(s) of a robotic platform..The model parameters can be evaluated by an objective function”).
Liang is considered to be analogous to the claimed invention because it pertains to the same field of simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation as modified to further incorporate Liang’s simulation for the advantage of generating a mesh representation of the dynamic object, which enables measuring consistency between the three-dimensional mesh representation of the dynamic object and the three-dimensional data.
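For illustration only, the following hypothetical Python sketch shows extraction of detection time, location, and shape details from a sequence of scaled meshes, in the style of the per-time-step object model parameters Liang describes; all structures and names are illustrative assumptions, not Liang’s disclosure.

```python
import numpy as np

def extract_mesh_details(mesh_sequence, timestamps, scale=1.0):
    """Given a sequence of mesh vertex arrays (one per time step) and
    matching timestamps, extract the detection time, per-step object
    location (mesh centroid), and a coarse shape extent."""
    details = []
    for t, vertices in zip(timestamps, mesh_sequence):
        v = np.asarray(vertices) * scale           # apply the mesh scale
        centroid = v.mean(axis=0)                  # object location
        extent = v.max(axis=0) - v.min(axis=0)     # bounding-box shape
        details.append({"time": t, "location": centroid, "shape": extent})
    # The first entry corresponds to the point in time at which the
    # object is detected in the simulated environment.
    return {"first_detection": details[0], "trajectory": details}
```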
Regarding claim 15, Farabet and Zapolsky do not specifically disclose:
further comprising extracting details associated with at least one of the point in time at which the one or more objects are detected or a location of the one or more objects from a scaled mesh representing each of the one or more objects in the simulated environment, wherein the extracted details are used to evaluate the simulated sensor.
However, Liang discloses:
further comprising extracting details associated with at least one of the point in time at which the one or more objects are detected or a location of the one or more objects from a scaled mesh representing each of the one or more objects in the simulated environment ([0010], [0013], [0090] generate a three-dimensional mesh representation of the dynamic object within a simulated environment, simulation data can include a plurality of sequences of object model parameters, each sequence of object model parameters can be indicative of a trajectory of a respective dynamic object over time), wherein the extracted details are used to evaluate the simulated sensor ([0035] “Each realistic three-dimensional pedestrian sequence can be generated based on three-dimensional data (e.g., sparse point cloud(s), etc.) and two-dimensional data (e.g., image frame(s), etc.) captured by one or more sensor(s) of a robotic platform..The model parameters can be evaluated by an objective function”).
Liang is considered to be analogous to the claimed invention because it pertains to the same field of simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation as modified to further incorporate Liang’s simulation for the advantage of generating a mesh representation of the dynamic object, which enables measuring consistency between the three-dimensional mesh representation of the dynamic object and the three-dimensional data.
Regarding claim 21, Farabet and Zapolsky do not specifically disclose:
wherein the details extracted from the scaled mesh further include a shape of the one or more objects.
However, Liang discloses:
wherein the details extracted from the scaled mesh further include a shape of the one or more objects ([0010], [0013], [0035] shape of a three-dimensional mesh representation of the dynamic object within a simulated environment based, at least in part, on the plurality of object model parameters).
Liang is considered to be analogous to the claimed invention because it pertains to the same field of simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation as modified to further incorporate Liang’s simulation for the advantage of generating a mesh representation of the dynamic object, which enables measuring consistency between the three-dimensional mesh representation of the dynamic object and the three-dimensional data.
Regarding claim 22, Farabet and Zapolsky do not specifically disclose:
wherein the details extracted from the scaled mesh further include a shape of the one or more objects.
However, Liang discloses:
wherein the details extracted from the scaled mesh further include a shape of the one or more objects ([0010], [0013], [0035] shape of a three-dimensional mesh representation of the dynamic object within a simulated environment based, at least in part, on the plurality of object model parameters).
Liang is considered to be analogous to the claimed invention because it pertains to the same field of simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation as modified to further incorporate Liang’s simulation for the advantage of generating a mesh representation of the dynamic object, which enables measuring consistency between the three-dimensional mesh representation of the dynamic object and the three-dimensional data.
Regarding claim 26, Farabet and Zapolsky do not specifically disclose:
wherein the one or more server computing devices are further configured to extract details associated with at least one of the point in time at which the one or more objects are detected or a location of the one or more objects from a scaled mesh representing each of the one or more objects in the simulated environment, wherein the extracted details are used to evaluate the simulated sensor.
However, Liang discloses:
wherein the one or more server computing devices are further configured to extract details associated with at least one of the point in time at which the one or more objects are detected or a location of the one or more objects from a scaled mesh representing each of the one or more objects in the simulated environment ([0010], [0013], [0090] generate a three-dimensional mesh representation of the dynamic object within a simulated environment, simulation data can include a plurality of sequences of object model parameters, each sequence of object model parameters can be indicative of a trajectory of a respective dynamic object over time), wherein the extracted details are used to evaluate the simulated sensor ([0035] “Each realistic three-dimensional pedestrian sequence can be generated based on three-dimensional data (e.g., sparse point cloud(s), etc.) and two-dimensional data (e.g., image frame(s), etc.) captured by one or more sensor(s) of a robotic platform..The model parameters can be evaluated by an objective function”).
Liang is considered to be analogous to the claimed invention because it pertains to the same field of simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Farabet’s simulation as modified to further incorporate Liang’s simulation for the advantage of generating a mesh representation of the dynamic object, which enables measuring consistency between the three-dimensional mesh representation of the dynamic object and the three-dimensional data.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAYSUN WU whose telephone number is (571)272-1528. The examiner can normally be reached Monday-Friday 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached on (571)272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAYSUN WU/Examiner, Art Unit 3665
/DONALD J WALLACE/Primary Examiner, Art Unit 3665