DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Independent claims 1, 8, and 15 have been amended.
Claims 2, 9, 16, and 20 have been cancelled.
There are no new claims.
Claims 1, 3-8, 10-15, 17-19, and 21 are currently pending.
The official correspondence below is a non-final Office action.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
CLAIM 8 (CLAIMS 1 AND 15 ARE PARALLEL IN SCOPE AND SPIRIT) IS REJECTED under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claim 8 is directed to a method (i.e., a process). Therefore, claim 8 is within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 8 includes limitations that recite an abstract idea (emphasized below in bold text) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 8 Recites:
(Currently Amended) A computer-implemented method, comprising:
receiving sensor data;
providing the sensor data to a plurality of validation modules and a plurality of perception modules
wherein a first validation module among the plurality of validation modules has a first network architecture, wherein a second validation module among the plurality of validation modules has a second network architecture, wherein the first network architecture is distinct from the second network architecture, and wherein each of the plurality of perception modules generates a perception output;
receiving, from the plurality of validation modules, a first set of perception outputs;
receiving, from the plurality of perception modules, a second set of perception outputs; and
employing a consensus algorithm for determining a ground-truth perception output by evaluating the first and second set of perception outputs from the plurality of validation modules and the plurality of perception modules, respectively, wherein the consensus algorithm comprises a weighted voting consensus approach whereby a weight of a given perception output is based on an amount of time that a corresponding validation module of the plurality of validation modules has been deployed, an historic accuracy of the corresponding validation module of the plurality of validation modules, and versioning information associated with the corresponding validation module of the plurality of validation modules.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “evaluating” in the context of this claim encompasses a person observing a set of outputs and forming a simple judgment, wherein the observation entails noting the amount of time a device or software has been in service and its historical accuracy in order to determine credibility. The examiner respectfully submits that determining credibility based upon historical accuracy is well within the capabilities of the human mind. Accordingly, the claim recites at least one abstract idea.
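For illustration only (the application contains no code, and the module names, parameters, and the particular weighting formula below are hypothetical), the weighted voting consensus approach recited in claim 8, in which each validation module's vote is weighted by its deployment time, historic accuracy, and versioning information, could be sketched as:

```python
# Hypothetical sketch of the claimed weighted voting consensus.
# All names and the weighting formula are the examiner's assumptions,
# not disclosures from the application.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ValidationModule:
    output: str            # perception output, e.g., a detected object label
    days_deployed: float   # amount of time the module has been deployed
    accuracy: float        # historic accuracy in [0, 1]
    version: int           # versioning information; higher = more recent

def consensus(modules, max_days=365.0, max_version=10):
    """Return the perception output with the highest total weighted vote."""
    votes = Counter()
    for m in modules:
        # Weight combines deployment time, historic accuracy, and version
        # recency, each normalized to [0, 1]; the product is one plausible
        # combination among many.
        weight = (min(m.days_deployed / max_days, 1.0)
                  * m.accuracy
                  * (m.version / max_version))
        votes[m.output] += weight
    return votes.most_common(1)[0][0]

mods = [
    ValidationModule("pedestrian", 300, 0.95, 9),
    ValidationModule("pedestrian", 250, 0.90, 8),
    ValidationModule("cyclist",     30, 0.60, 3),
]
print(consensus(mods))  # "pedestrian"
```

Under this reading, the weighting arithmetic and the final majority judgment are each steps a person could perform mentally or with pen and paper, which is the premise of the mental-process characterization above.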
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
Claim 8 Recites:
(Currently Amended) A computer-implemented method, comprising:
receiving sensor data;
providing the sensor data to a plurality of validation modules and a plurality of perception modules
wherein a first validation module among the plurality of validation modules has a first network architecture, wherein a second validation module among the plurality of validation modules has a second network architecture, wherein the first network architecture is distinct from the second network architecture, and wherein each of the plurality of perception modules generates a perception output;
receiving, from the plurality of validation modules, a first set of perception outputs;
receiving, from the plurality of perception modules, a second set of perception outputs; and
employing a consensus algorithm for determining a ground-truth perception output by evaluating the first and second set of perception outputs from the plurality of validation modules and the plurality of perception modules, respectively, wherein the consensus algorithm comprises a weighted voting consensus approach whereby a weight of a given perception output is based on an amount of time that a corresponding validation module of the plurality of validation modules has been deployed, an historic accuracy of the corresponding validation module of the plurality of validation modules, and versioning information associated with the corresponding validation module of the plurality of validation modules.
Regarding the additional limitations of “receiving”, “providing”, “receiving”, and “receiving”, the examiner submits that these limitations are insignificant extra-solution (pre-solution) activities that use a computer to perform the process of transferring data. In particular, the receiving of data from sensors amounts to mere data gathering, which is a form of insignificant extra-solution activity. The “providing” is a transfer of data, which is a form of insignificant extra-solution activity. Lastly, the two subsequent “receiving” steps are additional recitations of data-in/data-out at a general, high level. The recited network structure merely describes how to generally “apply” the otherwise mental judgments in a generic or general-purpose vehicle control environment and merely automates the components of the evaluating step. Lastly, “versioning”, under the broadest reasonable interpretation, describes creating new data, overwriting/updating data, tracking different iterations, or otherwise managing changes to data. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. To the examiner’s best understanding, the claim as a whole recites a determination performable within the human mind together with movement/management of data, wherein the mental process is not applied to any solution, control, or command.
Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 8 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, the additional elements describe data acquisition and data management and amount to nothing more than applying the exception using generic computer components (modules 1 and 2). Generally applying an exception using a generic computer component cannot provide an inventive concept. And as discussed above, the additional limitations are insignificant extra-solution activities. Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitations of “receiving” and “versioning” are well-understood, routine, and conventional activities because the background recites that the sensors are all conventional sensors and the specification does not provide any indication that the modules are anything other than conventional processors. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner.
The additional limitation of “versioning (managing)” is a well-understood, routine, and conventional activity because the Federal Circuit in Trading Techs. Int’l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), for example, indicated that the mere displaying of data is a well-understood, routine, and conventional function. Hence, the claim is not patent eligible.
Dependent claims 10-14 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 10-14 are not patent eligible under the same rationale as provided in the rejection of claim 8.
Therefore, claims 8 and 10-14 are ineligible under 35 U.S.C. § 101. Claims 1 and 15 are parallel in scope and spirit, contain the same or similar limitations, and are rejected under the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 3-8, 10-15, 17-19, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Crego (US 11741274 B1) in view of Elli (US 20220114458 A1), in further view of Hyde (US 20210300425 A1) and Capell (US 12204823 B1).
REGARDING CLAIM 1, Crego discloses, at least one memory (Crego: [FIG. 2(214)(224)]); and at least one processor coupled to the at least one memory (Crego: [FIG. 2(214)(224)]), the at least one processor configured to: receive sensor data (Crego: the perception component may comprise hardware and/or software for receiving sensor data from one or more sensors of an autonomous vehicle and detecting one or more objects in an environment associated with the autonomous vehicle and/or characteristics associated with the one or more objects (Col. 2, Ln. 51-56)); determine a ground-truth perception output (Crego: Ground truth data may comprise manually labeled, automatically-labeled (e.g., via a machine-learned model pipeline trained for the task of ground truth generation) (Col. 8, Ln. 34-36)).
Crego does not explicitly disclose, a dual system.
However, in the same field of endeavor, Elli discloses, provide the sensor data to a plurality of validation modules (Elli: [0063] A first EE module 406a of the first perception task module 402a may generate a first estimation 409a. The first estimation 409a may include a first error distribution for the first task data. The first EE module 406a may estimate the first error distribution for the first perception task and the first sensor data; [0065] A second EE module 406b of the second perception task module 402b may generate a second estimation 409b. The second estimation 409b may include a second error distribution for the second task data. The second EE module 406b may estimate the second error distribution for the second perception task and the second sensor data; [0079] The NELR module 604 may be trained by comparing the pre-identified latent representations 601 to the ground truth labels; [FIG. 4 (406a)(406b)]) and a plurality of perception modules (Elli: [FIG. 4(402a)(402b)]), wherein a first validation module among the plurality of validation modules has a first network architecture (Elli: [FIG. 4]), wherein a second validation module among the plurality of validation modules has a second network architecture (Elli: [FIG. 4]), wherein the first network architecture is distinct from the second network architecture (Elli: [FIG. 4]), and wherein each of the plurality of perception modules generates a perception output (Elli: [FIG. 4(409a)(409b)]); receive, from the plurality of validation modules, a first set of perception outputs (Elli: [FIG. 4(409a)(409b)]); receive, from the plurality of perception modules, a second set of perception outputs (Elli: [FIG. 4(409a)(409b)]); and employ a consensus algorithm (Elli: [0097] The NELR module may include a ML algorithm or an AI algorithm. The ML algorithm or the AI algorithm may be selected based on the type of sensor data and the ground truth labels in the DSD. 
For example, if the sensor data includes RGB data, the ML algorithm of the NELR module may include a convolutional neural network (CNN). The CNN may be trained to include a general loss function for regression (e.g., a mean squared error), an optimizer (e.g. a stochastic gradient descent or ADAM)), or some combination thereof; [0019] include closed solutions in probability density function) evaluating the first and second set of perception outputs (Elli: [0066] A safety monitor 416 may receive the first environmental model 412a, the second environmental model 412b, the fused environment model 414, the first estimation 409a, the second estimation 409b, or some combination thereof. The safety monitor 416 may identify safety issues for operation of the vehicle within the environment based on the first environmental model 412a, the second environmental model 412b, the fused environment model 414, the first estimation 409a, the second estimation 409b, or some combination thereof) from the plurality of validation modules (Elli: [FIG. 4(402a)(402b)]) and the plurality of perception modules, respectively (Elli: [FIG. 4 (406a)(406b)]), for the benefit of multimodal automatic mapping of sensing defects to task-specific (examiner: first (camera), second (radar/lidar)) error measurements.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Crego to include the two modules taught by Elli. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to provide multimodal automatic mapping of sensing defects to task-specific (examiner: first (camera), second (radar/lidar)) error measurements.
Crego, as modified, does not explicitly disclose, the consensus algorithm comprises a weighted voting consensus approach whereby a weight of a given perception output is based on an historic accuracy of the corresponding validation module of the plurality of validation modules, and versioning information associated with the corresponding validation module of the plurality of validation modules.
However, in the same field of endeavor, Hyde discloses, the consensus algorithm comprises a weighted voting consensus approach (Hyde: [0055] monitoring circuitry can assign a certain weight to the first output … and a fourth functional circuitry generated a fourth output using a deterministic algorithm, the monitoring circuitry can weigh the consistency of the fourth output more heavily) an historic accuracy of the corresponding validation module of the plurality of validation modules (Hyde: [0053] if a monitoring circuit receives five outputs where the first three outputs do not recognize an object in an environment and the last two outputs do recognize an object in the environment, the monitoring circuit can still find a sufficient level of consistency between the results, as the consistency of the last two outputs can be weighed more heavily as they are more temporally relevant than the first three outputs. As such, the temporal recency of the outputs can be considered and utilized in the weighting of consistency between outputs by the monitoring circuit; [0152] the consistency of the last two outputs (e.g., output data 810D) can be weighed more heavily as they are more temporally relevant than the first two outputs (e.g., output data 810A-810B). As such, the temporal recency of the outputs can be considered and utilized in the weighting of consistency between outputs by the monitoring circuitry ... [0154] the monitoring circuitry 812 can weigh the consistency of various outputs based on the algorithm), and versioning information associated with the corresponding validation module of the plurality of validation modules (Hyde: [0048] The world state can describe a perception of the environment external to the autonomous vehicle. 
The second functional circuitry generate a first output validation for the first output in the same manner; [0088] the perception system 124 can update the state data 130 for each object at each iteration), for the benefit of determining a threshold level of difference (consensus) to compute a proper vehicle response.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by a modified Crego to include consensus validation taught by Hyde. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a threshold level of difference (consensus) to compute a proper vehicle response.
Crego, as modified, does not explicitly disclose, a weight of a given perception output is based on an amount of time that a corresponding validation module of the plurality of validation modules has been deployed.
However, in the same fields of endeavor, Capell discloses, a weight of a given perception output is based on an amount of time that a corresponding validation module of the plurality of validation modules has been deployed (Capell: The simulation log may be stored in the database of simulation data 212 storing a historical log of simulation runs indexed by corresponding run ID and/or batch ID ... the simulation result and/or a simulation log may be used as training data for machine learning engine (Col. 11, Ln. 62-66); the machine learning engine 166 may compare the predicted machine learning model output with a machine learning model known output (e.g., simulated output in the simulation scenario) from the training instance and, using the comparison, update one or more weights in the machine learning model 224 ... one or more weights may be updated by backpropagating the difference over the entire machine learning model (Col. 13, Ln. 20-28)), for the benefit of creating a perception validation scenario to create or refine a perception model used for controlling the operation of autonomous vehicles.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by a modified Crego to include the deployment-history-based weighting taught by Capell. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to create a perception validation scenario to create or refine a perception model used for controlling the operation of autonomous vehicles.
REGARDING CLAIM 3, Crego, as modified, remains as applied above to claim 1, and further, Crego, as modified, also discloses each of the plurality of perception modules comprises a deep-learning neural network (Crego: Col. 18, Ln. 18-20).
REGARDING CLAIM 4, Crego, as modified, remains as applied above to claim 1, and further, Crego, as modified, also discloses the sensor data is collected using one or more autonomous vehicle (AV) mounted sensors (Crego: Col. 26, Ln. 35-39).
REGARDING CLAIM 5, Crego, as modified, remains as applied above to claim 1, and further, Crego, as modified, also discloses, each of the perception modules comprises a machine-learning model that has been trained on different training data (Crego: Col. 8, Ln. 34-36).
REGARDING CLAIM 6, Crego, as modified, remains as applied above to claim 1, and further, Crego, as modified, also discloses each of the perception modules comprises a machine-learning model that has been trained using a different training paradigm (Elli: [0080]; [0166]).
REGARDING CLAIM 7, Crego, as modified, remains as applied above to claim 1, and further, Crego, as modified, also discloses the sensor data comprises: camera data, Light Detection and Ranging (LiDAR) data, radar data, or a combination thereof (Crego: Col. 11, Ln. 21-25).
REGARDING CLAIM 8, Crego discloses, receiving sensor data (Crego: Col. 2, Ln. 51-56); and determining a ground-truth perception output (Crego: Col. 8, Ln. 34-36).
Crego does not explicitly disclose, a dual system.
However, in the same field of endeavor, Elli discloses, provide the sensor data to a plurality of validation modules (Elli: [0063]; [0065]; [0079]; [FIG. 4 (406a)(406b)]) and a plurality of perception modules (Elli: [FIG. 4(402a)(402b)]), wherein a first validation module among the plurality of validation modules has a first network architecture (Elli: [FIG. 4]), wherein a second validation module among the plurality of validation modules has a second network architecture (Elli: [FIG. 4]), wherein the first network architecture is distinct from the second network architecture (Elli: [FIG. 4]), and wherein each of the plurality of perception modules generates a perception output (Elli: [FIG. 4(409a)(409b)]); receive, from the plurality of validation modules, a first set of perception outputs (Elli: [FIG. 4(409a)(409b)]); receive, from the plurality of perception modules, a second set of perception outputs (Elli: [FIG. 4(409a)(409b)]); and employ a consensus algorithm (Elli: [0097]; [0019]) evaluating the first and second set of perception outputs (Elli: [0066]) from the plurality of validation modules (Elli: [FIG. 4(402a)(402b)]) and the plurality of perception modules, respectively (Elli: [FIG. 4 (406a)(406b)]), for the benefit of multimodal automatic mapping of sensing defects to task-specific (examiner: first (camera), second (radar/lidar)) error measurements.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Crego to include the two modules taught by Elli. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to provide multimodal automatic mapping of sensing defects to task-specific (examiner: first (camera), second (radar/lidar)) error measurements.
Crego, as modified, does not explicitly disclose, the consensus algorithm comprises a weighted voting consensus approach whereby a weight of a given perception output is based on an historic accuracy of the corresponding validation module of the plurality of validation modules, and versioning information associated with the corresponding validation module of the plurality of validation modules.
However, in the same field of endeavor, Hyde discloses, the consensus algorithm comprises a weighted voting consensus approach (Hyde: [0055] monitoring circuitry can assign a certain weight to the first output … and a fourth functional circuitry generated a fourth output using a deterministic algorithm, the monitoring circuitry can weigh the consistency of the fourth output more heavily) an historic accuracy of the corresponding validation module of the plurality of validation modules (Hyde: [0053] if a monitoring circuit receives five outputs where the first three outputs do not recognize an object in an environment and the last two outputs do recognize an object in the environment, the monitoring circuit can still find a sufficient level of consistency between the results, as the consistency of the last two outputs can be weighed more heavily as they are more temporally relevant than the first three outputs. As such, the temporal recency of the outputs can be considered and utilized in the weighting of consistency between outputs by the monitoring circuit; [0152] the consistency of the last two outputs (e.g., output data 810D) can be weighed more heavily as they are more temporally relevant than the first two outputs (e.g., output data 810A-810B). As such, the temporal recency of the outputs can be considered and utilized in the weighting of consistency between outputs by the monitoring circuitry ... [0154] the monitoring circuitry 812 can weigh the consistency of various outputs based on the algorithm), and versioning information associated with the corresponding validation module of the plurality of validation modules (Hyde: [0048] The world state can describe a perception of the environment external to the autonomous vehicle. 
The second functional circuitry generate a first output validation for the first output in the same manner; [0088] the perception system 124 can update the state data 130 for each object at each iteration), for the benefit of determining a threshold level of difference (consensus) to compute a proper vehicle response.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by a modified Crego to include consensus validation taught by Hyde. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a threshold level of difference (consensus) to compute a proper vehicle response.
Crego, as modified, does not explicitly disclose, a weight of a given perception output is based on an amount of time that a corresponding validation module of the plurality of validation modules has been deployed.
However, in the same fields of endeavor, Capell discloses, a weight of a given perception output is based on an amount of time that a corresponding validation module of the plurality of validation modules has been deployed (Capell: The simulation log may be stored in the database of simulation data 212 storing a historical log of simulation runs indexed by corresponding run ID and/or batch ID ... the simulation result and/or a simulation log may be used as training data for machine learning engine (Col. 11, Ln. 62-66); the machine learning engine 166 may compare the predicted machine learning model output with a machine learning model known output (e.g., simulated output in the simulation scenario) from the training instance and, using the comparison, update one or more weights in the machine learning model 224 ... one or more weights may be updated by backpropagating the difference over the entire machine learning model (Col. 13, Ln. 20-28)), for the benefit of creating a perception validation scenario to create or refine a perception model used for controlling the operation of autonomous vehicles.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by a modified Crego to include the deployment-history-based weighting taught by Capell. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to create a perception validation scenario to create or refine a perception model used for controlling the operation of autonomous vehicles.
REGARDING CLAIM 10, Crego, as modified, remains as applied above to claim 8, and further, Crego, as modified, also discloses each of the plurality of perception modules comprises a deep-learning neural network (Crego: Col. 18, Ln. 18-20).
REGARDING CLAIM 11, Crego, as modified, remains as applied above to claim 8, and further, Crego, as modified, also discloses the sensor data is collected using one or more autonomous vehicle (AV) mounted sensors (Crego: Col. 26, Ln. 35-39).
REGARDING CLAIM 12, Crego, as modified, remains as applied above to claim 8, and further, Crego, as modified, also discloses, each of the perception modules comprises a machine-learning model that has been trained on different training data (Crego: Col. 8, Ln. 34-36).
REGARDING CLAIM 13, Crego, as modified, remains as applied above to claim 8, and further, Crego, as modified, also discloses each of the perception modules comprises a machine-learning model that has been trained using a different training paradigm (Elli: [0080]; [0166]).
REGARDING CLAIM 14, Crego, as modified, remains as applied above to claim 8, and further, Crego, as modified, also discloses the sensor data comprises: camera data, Light Detection and Ranging (LiDAR) data, radar data, or a combination thereof (Crego: Col. 11, Ln. 21-25).
REGARDING CLAIM 15, Crego discloses to receive sensor data (Crego: Col. 2, Ln. 51-56) and to determine a ground-truth perception output (Crego: Col. 8, Ln. 34-36).
Crego does not explicitly disclose a dual system.
However, in the same field of endeavor, Elli discloses provide the sensor data to a plurality of validation modules (Elli: [0063]; [0065]; [0079]; [FIG. 4 (406a)(406b)]) and a plurality of perception modules (Elli: [FIG. 4 (402a)(402b)]), wherein a first validation module among the plurality of validation modules has a first network architecture (Elli: [FIG. 4]), wherein a second validation module among the plurality of validation modules has a second network architecture (Elli: [FIG. 4]), wherein the first network architecture is distinct from the second network architecture (Elli: [FIG. 4]), and wherein each of the plurality of perception modules generates a perception output (Elli: [FIG. 4 (409a)(409b)]); receive, from the plurality of validation modules, a first set of perception outputs (Elli: [FIG. 4 (409a)(409b)]); receive, from the plurality of perception modules, a second set of perception outputs (Elli: [FIG. 4 (409a)(409b)]); and employ a consensus algorithm (Elli: [0097]; [0019]) evaluating the first and second set of perception outputs (Elli: [0066]) from the plurality of validation modules (Elli: [FIG. 4 (406a)(406b)]) and the plurality of perception modules, respectively (Elli: [FIG. 4 (402a)(402b)]), for the benefit of multimodal automatic mapping of sensing defects to task-specific (examiner: first (camera), second (radar/lidar)) error measurements.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by Crego to include the two modules taught by Elli. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to enable multimodal automatic mapping of sensing defects to task-specific (examiner: first (camera), second (radar/lidar)) error measurements.
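Examiner's note, for illustration only: the consensus evaluation over the two sets of perception outputs described above can be sketched as a simple vote over the combined outputs. The function name, label representation, and quorum threshold below are the examiner's hypothetical assumptions and are not taken from Elli.

```python
# Illustrative sketch of a consensus algorithm evaluating a first set of
# perception outputs (from validation modules) and a second set (from
# perception modules). Names and the quorum threshold are hypothetical.
from collections import Counter

def consensus(validation_outputs, perception_outputs, quorum=0.5):
    """Return the label supported by more than `quorum` of all outputs,
    or None when no label reaches consensus."""
    outputs = list(validation_outputs) + list(perception_outputs)
    label, votes = Counter(outputs).most_common(1)[0]
    return label if votes / len(outputs) > quorum else None
```

For example, if two validation modules and one of two perception modules classify a detected object as "car", three of four outputs agree and the consensus resolves to "car"; an even split returns no consensus.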
Crego, as modified, does not explicitly disclose the consensus algorithm comprises a weighted voting consensus approach based on an historic accuracy of the corresponding validation module of the plurality of validation modules, and versioning information associated with the corresponding validation module of the plurality of validation modules.
However, in the same field of endeavor, Hyde discloses the consensus algorithm comprises a weighted voting consensus approach (Hyde: [0055] monitoring circuitry can assign a certain weight to the first output … and a fourth functional circuitry generated a fourth output using a deterministic algorithm, the monitoring circuitry can weigh the consistency of the fourth output more heavily) based on an historic accuracy of the corresponding validation module of the plurality of validation modules (Hyde: [0053] if a monitoring circuit receives five outputs where the first three outputs do not recognize an object in an environment and the last two outputs do recognize an object in the environment, the monitoring circuit can still find a sufficient level of consistency between the results, as the consistency of the last two outputs can be weighed more heavily as they are more temporally relevant than the first three outputs. As such, the temporal recency of the outputs can be considered and utilized in the weighting of consistency between outputs by the monitoring circuit; [0152] the consistency of the last two outputs (e.g., output data 810D) can be weighed more heavily as they are more temporally relevant than the first two outputs (e.g., output data 810A-810B). As such, the temporal recency of the outputs can be considered and utilized in the weighting of consistency between outputs by the monitoring circuitry ... [0154] the monitoring circuitry 812 can weigh the consistency of various outputs based on the algorithm), and versioning information associated with the corresponding validation module of the plurality of validation modules (Hyde: [0048] The world state can describe a perception of the environment external to the autonomous vehicle. The second functional circuitry generates a first output validation for the first output in the same manner; [0088] the perception system 124 can update the state data 130 for each object at each iteration), for the benefit of determining a threshold level of difference (consensus) to compute a proper vehicle response.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by the modified Crego to include the consensus validation taught by Hyde. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to determine a threshold level of difference (consensus) to compute a proper vehicle response.
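Examiner's note, for illustration only: the recency-weighted voting quoted from Hyde [0053] can be sketched as follows. The linear weight scheme and function name are the examiner's hypothetical simplifications, not Hyde's implementation.

```python
# Illustrative sketch of weighted voting in which more recent outputs
# are weighed more heavily, per the temporal-recency weighting quoted
# from Hyde [0053]. The linear weight scheme is a hypothetical choice.
from collections import defaultdict

def weighted_vote(outputs):
    """outputs: labels ordered oldest-to-newest.
    Each output's weight grows linearly with its recency."""
    tally = defaultdict(float)
    for i, label in enumerate(outputs, start=1):
        tally[label] += i  # later (more recent) outputs weigh more
    return max(tally, key=tally.get)
```

Applied to Hyde's five-output example (three older "no object" outputs, two more recent "object" outputs), the recency weights 4 + 5 = 9 outweigh 1 + 2 + 3 = 6, so the vote resolves in favor of the more temporally relevant "object" outputs.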
Crego, as modified, does not explicitly disclose, a weight of a given perception output is based on an amount of time that a corresponding validation module of the plurality of validation modules has been deployed.
However, in the same field of endeavor, Capell discloses that a weight of a given perception output is based on an amount of time that a corresponding validation module of the plurality of validation modules has been deployed (Capell: The simulation log may be stored in the database of simulation data 212 storing a historical log of simulation runs indexed by corresponding run ID and/or batch ID ... the simulation result and/or a simulation log may be used as training data for machine learning engine (Col. 11, Ln. 62-66); the machine learning engine 166 may compare the predicted machine learning model output with a machine learning model known output (e.g., simulated output in the simulation scenario) from the training instance and, using the comparison, update one or more weights in the machine learning model 224 ... one or more weights may be updated by backpropagating the difference over the entire machine learning model (Col. 13, Ln. 20-28)), for the benefit of creating a perception validation scenario to create or refine a perception model used for controlling the operation of autonomous vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus disclosed by the modified Crego to include weighting a perception output based on the amount of time a validation module has been deployed, as taught by Capell. One of ordinary skill in the art would have been motivated to make this modification, with a reasonable expectation of success, in order to create a perception validation scenario used to create or refine a perception model for controlling the operation of autonomous vehicles.
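Examiner's note, for illustration only: the weight-update step quoted from Capell (comparing a predicted model output against a known simulated output and backpropagating the difference) can be sketched with a single-parameter model. The one-weight linear model and learning rate below are the examiner's hypothetical simplifications.

```python
# Illustrative sketch of updating a model weight from the difference
# between a predicted output and a known (simulated) output, in the
# spirit of the weight update quoted from Capell (Col. 13, Ln. 20-28).
# A single linear weight and fixed learning rate are hypothetical.
def update_weight(w, x, known_y, lr=0.1):
    """One gradient step on squared error for the model y = w * x."""
    predicted = w * x
    error = predicted - known_y   # compare predicted vs. known output
    return w - lr * error * x     # backpropagate the difference

w = 0.0
for _ in range(100):
    w = update_weight(w, x=1.0, known_y=2.0)
# w converges toward 2.0, the weight reproducing the known output
```

Repeating the comparison over many training instances drives the weight toward the value that reproduces the simulated (known) output, which is the refinement of the perception model that the cited passage describes.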
REGARDING CLAIM 17, Crego, as modified, remains as applied above to claim 15, and further, Crego, as modified, also discloses each of the plurality of perception modules comprises a deep-learning neural network (Crego: Col. 18, Ln. 18-20).
REGARDING CLAIM 18, Crego, as modified, remains as applied above to claim 15, and further, Crego, as modified, also discloses the sensor data is collected using one or more autonomous vehicle (AV) mounted sensors (Crego: Col. 26, Ln. 35-39).
REGARDING CLAIM 19, Crego, as modified, remains as applied above to claim 15, and further, Crego, as modified, also discloses each of the perception modules comprises a machine-learning model that has been trained on different training data (Crego: Col. 8, Ln. 34-36).
REGARDING CLAIM 21, Crego, as modified, remains as applied above to claim 1, and further, Crego, as modified, also discloses the first validation module was trained using first training data (Elli: [0057]; [0063-0064]), and wherein the second validation module was trained using second training data (Elli: [0057]; [0065]) that differs from the first training data (Elli: [0077]; [0054-0057]).
Response to Arguments
Applicant’s arguments with respect to the rejection of the independent claims under 35 USC §103, obviousness, have been considered but are moot because the new ground of rejection does not rely on the reference combination applied in the prior rejection of record for matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARRON SANTOS whose telephone number is (571)272-5288. The examiner can normally be reached Monday - Friday: 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANGELA ORTIZ can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.S./Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663