DETAILED ACTION
This action is responsive to the submission filed on 11/06/2025. Claims 1-11, 13-25 are pending and have been examined. This action is non-final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/06/2025 has been entered.
Response to Arguments
Argument 1: The applicant argues that the claims are not mental processes because, in practice, a human could not possibly carry out the recited steps, especially while a vehicle is driving. They point to claim 1 as an example: it requires receiving dense sensor data from LIDAR and cameras, determining spatial locations of many surrounding objects using bounding boxes and semantic segmentation into classes like vehicles, pedestrians, and animals, assigning care or no-care attributes to each label based on combined radar and auxiliary data, generating model predictions, and applying a detailed loss function with positive and negative contributions and confidence thresholds. The applicant says there is far too much data and too many calculations for a person with pen and paper to perform in a useful time frame, so these steps must be done by a computer system in a moving vehicle. They also cite the specification’s statement that reliable labels cannot be derived from radar data directly by humans or by another algorithm, and that LIDAR-based labels are used as cross-domain ground truth, to show that the invention is tied to specific sensor processing in an operating vehicle, not mental reasoning. On this basis, the applicant asks that the 101 rejection be withdrawn and notes that new claims 21 to 23 add further subject-matter-eligible features beyond claims 1, 11, and 20.
Examiner Response to Argument 1: The examiner has considered the elements set forth above; however, applicant’s arguments do not overcome the rejection under 35 U.S.C. 101. As mapped, the amended claims remain directed to mental processes and mathematical concepts implemented on generic sensors and computer hardware, with only field-of-use and data-gathering limitations. In claim 1 (and similarly in claims 11 and 20), the core steps are identifying labels based on dense auxiliary data and “at least one property of entities,” where that property is expressly defined as i) generating bounding boxes around objects and ii) performing semantic segmentation into object classes such as vehicle, pedestrian, and animal, and then assigning each label a care or no-care attribute by determining a perception capability and comparing a reference value to a threshold. Under the mapping, this is evaluation, classification, and labeling of information using observation and rule-based decision-making that can, in principle, be done by a person with pen and paper (for example, drawing boxes around objects, naming them, and marking them care/no-care depending on whether a score exceeds a threshold), and it is therefore a mental process. Likewise, defining a loss function that receives positive and negative loss contributions so that weights are increased or decreased depending on whether they contribute constructively, permitting negative contributions for all labels, permitting positive contributions for labels with a care attribute, and permitting positive contributions for labels with a no-care attribute only when a confidence value exceeds a predetermined threshold are mental processes. The same applies to the if/then conditions on numerical values (predictions, labels, weights, confidences, and thresholds), which the mapping identifies as mental processes that could also be worked out on paper for a small set of values, and to “generating model predictions for the labels.”
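For illustration of the examiner’s characterization only, the threshold-based care/no-care assignment described above can be reduced to a few lines of rule-based logic. The function and example values below are hypothetical and are not drawn from the claims or the cited references:

```python
# Hypothetical sketch of the care/no-care rule as mapped: a label is
# assigned the care attribute if the reference value for its spatial
# area is greater than a reference threshold, and the no-care
# attribute if the reference value is smaller than or equal to it.

def assign_attribute(reference_value: float, reference_threshold: float) -> str:
    """Rule-based decision: 'care' only when the reference value
    strictly exceeds the reference threshold."""
    return "care" if reference_value > reference_threshold else "no-care"

# Example: two labels from a scene, scored against a threshold of 0.5.
labels = [
    {"object_class": "vehicle", "reference_value": 0.9},
    {"object_class": "pedestrian", "reference_value": 0.2},
]
for label in labels:
    label["attribute"] = assign_attribute(label["reference_value"], 0.5)
# The vehicle label receives "care"; the pedestrian label receives "no-care".
```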
The dependent claim mappings further show that the additional features consist of more mental and mathematical steps: determining a numerical reference value from radar energy in a spatial area (claims 3/14); computing ranges and angles from radar data and assigning them to spatial areas to decide care or no-care (claims 4/15); estimating expected range, range rate, and angle from dense auxiliary data and assigning those expected values to radar-derived values (claims 5/16); estimating range rate from a speed vector computed as differences of label positions over time (claims 6/17); selecting subsets of auxiliary data points in a spatial area, determining whether a direct line of sight exists, and assigning care when a ratio of counts exceeds a threshold (claims 7/10); regarding a point as having line of sight when it lies within a field of view of one of multiple radar sensors (claims 8/18); and projecting selected points to a cylinder or sphere, dividing the surface into pixel areas, marking the closest point in each pixel as visible, counting visible points, and comparing that count to a visibility threshold (claims 9/19). In each instance, the mapping explains that these are human-mind-capable acts of selecting, calculating, projecting, counting, and comparing that can be done with pen and paper, so they, too, are mental processes or mathematical concepts.
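As one concrete illustration of the examiner’s point that these dependent features are acts of selecting, counting, and comparing, the ratio-of-counts line-of-sight rule can be sketched as follows. The names and data are hypothetical and are not the claimed implementation:

```python
# Hypothetical sketch of the mapped select/count/compare steps:
# select the auxiliary data points in a spatial area, count those with
# a direct line of sight, and assign "care" when the ratio of visible
# points to all selected points exceeds a threshold.

def assign_by_visibility(points, has_line_of_sight, ratio_threshold: float) -> str:
    """Count visible points and compare the visibility ratio to a threshold."""
    visible = sum(1 for point in points if has_line_of_sight(point))
    ratio = visible / len(points)
    return "care" if ratio > ratio_threshold else "no-care"

# Example: 5 points, 4 of which are visible, against a threshold of 0.6;
# the ratio 0.8 exceeds the threshold, so the label is marked "care".
points = [1, 2, 3, 4, 5]
result = assign_by_visibility(points, lambda p: p != 3, 0.6)
```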
Under Step 2A Prong 2 and Step 2B, the mapping makes clear that reciting that these abstract steps are “implemented within a vehicle while the host vehicle is driving,” and that the data are captured by at least one radar primary sensor and at least one lidar/camera auxiliary sensor, merely identifies the environment and the generic sources of data; this is treated as insignificant extra-solution activity or a field-of-use limitation. Similarly, performing the analysis “via a machine-learning algorithm” or on a generic processing unit is recited at a high level of generality, with no specific improvement to computer function, and therefore does not integrate the exception into a practical application or add significantly more. The mapping for claim 11 confirms that the system claim simply recites a primary radar sensor and an auxiliary lidar/camera sensor plus a processing unit configured to perform the same abstract analysis, which is just a generic machine implementation of the method. Finally, the mappings for new claims 21-25 show that these claims add only further mathematical concepts and field-of-use language: basing actions “on the loss function,” adjusting training parameters based on the loss function, controlling adjustments via an error-based loss function comparing predictions and target outputs, and, in claims 21, 24, and 25, broadly stating that, based on the loss function, the method or system may assist in or autonomously drive the vehicle. These limitations are all framed at a results-oriented, functional level with no added technical detail about how vehicle control is implemented, and the mapping therefore reasonably characterizes them as either mental/mathematical operations or as merely limiting the abstract idea to the driving environment under MPEP 2106.05(f) and (h).
With respect to applicant's remarks that claims 22 and 23 were added in response to a suggestion made in the October 16, 2025 interview and therefore "should contain statutory subject matter," this has been considered but is not persuasive. The suggestion concerned providing more detail on training. The language actually added in claims 22 and 23, however, merely describes generic supervised training: adjusting parameters of the machine-learning algorithm based on a loss function, and defining that loss function as an error signal from comparing predictions to a target output representative of the labels. Under their broadest reasonable interpretation and as mapped, these limitations recite the standard supervised-learning paradigm of computing an error between predictions and labels and updating parameters to reduce that error, which is itself a mathematical concept and mental process and does not add any particular technological implementation or improvement to computer functioning. Thus, although claims 22 and 23 respond to the interview suggestion by adding training-related language, they do not change the character of the claim set away from an abstract idea. Accordingly, even accepting applicant’s assertion that a human driver practically could not carry out all of these calculations in real time, the claims as written remain, under their broadest reasonable interpretation, directed to mental processes and mathematical concepts executed on generic sensors and processors with only insignificant data gathering and field-of-use limitations, so the rejection of all pending claims under 101 is properly maintained.
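The generic supervised-learning paradigm described above (computing an error between predictions and a target output and updating parameters to reduce that error) can be written in a few lines. The one-parameter model below is a hypothetical illustration, not the claimed algorithm:

```python
# Hypothetical one-parameter model trained by the generic paradigm
# described above: predict, compare the prediction to a target output,
# and adjust the parameter in the direction that reduces the error.

def training_step(weight: float, x: float, target: float, lr: float = 0.1) -> float:
    prediction = weight * x
    error = prediction - target       # error signal from comparing prediction to target
    return weight - lr * error * x    # gradient step for a squared-error loss

weight = 0.0
for _ in range(50):
    weight = training_step(weight, x=2.0, target=4.0)
# weight converges toward 2.0, so that weight * x approaches the target of 4.0
```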
Argument 2: Applicant argues that Musk does not teach the claimed “perception capability” based care/no-care assignment because Musk’s sensor configuration and thresholding are different from what is recited in the claims. In the present claims, the primary sensor is explicitly a radar sensor and the auxiliary sensor is a LIDAR and/or camera, and the care or no-care attribute is assigned based on a reference value computed from sparse primary (radar) data for each spatial area and compared to a reference threshold. Applicant points out that in Musk the camera functions as the primary vision sensor and the radar and LIDAR act as auxiliary sensors whose data is associated with the camera image. According to applicant, Musk’s “threshold value” is applied to decide whether auxiliary sensor data is reliable enough to be used as ground truth for the camera-based model, and thus the threshold concerns auxiliary data, not a perception capability of a primary radar sensor per label or spatial area. For this reason, applicant contends that Musk does not teach the claimed step of determining a perception capability of the primary sensor from sparse primary radar data, computing a reference value per spatial area, and assigning care versus no-care based on that primary-sensor reference value.
Examiner Response to Argument 2: The examiner has considered the argument set forth above, but it is not persuasive. As an initial matter, applicant’s emphasis on “sparse” primary data and “dense” auxiliary data is not persuasive, because “sparse” and “dense” are relative terms of degree that do not provide a clear boundary for the scope of the claims. A rejection of these terms under 35 U.S.C. 112(b) as indefinite is set forth below, and, consistent with that rejection, the “sparse” versus “dense” characterization does not carry patentable weight in the present 101 analysis. Applicant’s position also relies on importing the exemplary embodiment from paragraph [0022] of the specification (for example, “average of an intensity of the primary data”) into the claims. However, the claims merely recite that “the sparse primary data [is] usable to determine a reference value for a respective spatial area” and that a “perception capability” is used to assign care or no-care, without requiring any particular formula or that the reference value be an explicit average intensity. Under a broadest reasonable interpretation, a “perception capability” is simply a measure of how reliably the primary sensor can perceive a given label in its spatial area, and a “reference value” can be any scalar quantity or certainty measure derived from the primary sensor data in that area.
Musk teaches this same structure by using a radar sensor to emit radar and “identify the distance and direction of surrounding obstacles,” correlating these measurements to objects in the camera image, and then using “a threshold value… to determine whether to associate an object property as a ground truth of an identified object,” where “related data with a high degree of certainty is associated with an identified object while related data with a degree of certainty below a threshold value is not associated with the identified object.” For each object region in the camera image (that is, the spatial area or bounding box of the label), Musk derives from radar and other sensor data a certainty or reliability and compares it to a threshold to decide whether that object’s sensor measurement is used or ignored for training. The examiner interprets this certainty score as the claimed “reference value” for the spatial area and the threshold comparison as determining the primary sensor’s “perception capability” for that label, resulting in the same effect as assigning a “care” attribute when the reference value exceeds the threshold and a “no-care” attribute otherwise. Applicant further argues that Musk’s “primary” sensor is a camera and that radar and lidar are “auxiliary,” so Musk allegedly does not determine a perception capability of a primary radar sensor as claimed. This is also not persuasive. Musk expressly states that the “captured data includes vision data (such as video and/or still images) and additional auxiliary data such as radar, lidar, inertia, audio, odometry, location, and/or other forms of sensor data,” and further discloses that “in some embodiments, the related data may be conflicting sensor data. For example, ultrasonic and radar data output may conflict. In various embodiments, a threshold value is used to determine whether to associate an object property as a ground truth of an identified object” (citations omitted for brevity). 
Thus, Musk teaches using both radar and lidar together and comparing different sensor modalities via a threshold to decide which measurement to trust. The labels “primary” and “auxiliary” in Musk are functional and arbitrary. It would have been obvious to one of ordinary skill in the art to designate radar as the primary sensor and lidar and/or camera as auxiliary sensors while applying the same thresholding and conflict-resolution logic to determine which sensor has the better perception capability for each object and spatial area. Moreover, even in the configuration where the camera is treated as “primary” in Musk, any threshold-based decision that compares radar and lidar confidence to decide whether to accept a given object property as ground truth is still, in substance, a determination about how well the sensor under evaluation can perceive that object in that area. In other words, measuring the auxiliary sensor’s confidence and using that to decide whether to trust or discard an object property associated with the primary image is still a determination of perception capability for the sensor whose performance in that region is being evaluated. Accordingly, under a broadest reasonable interpretation of “primary sensor” and “perception capability,” Musk teaches determining a perception capability from sensor data on a per-label spatial area and applying a threshold to that reference value, even if Musk does not use the exact terminology of the instant application or uses a different naming convention for primary versus auxiliary sensors. The examiner maintains their position.
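The certainty-thresholding logic attributed to Musk above amounts to a single comparison per object. The following sketch uses hypothetical names and values and is not Musk’s implementation:

```python
# Hypothetical sketch of threshold-based ground-truth association as
# characterized above: a sensor measurement is associated with an
# identified object only when its degree of certainty is not below a
# threshold value; otherwise it is not used as ground truth.

def associate_as_ground_truth(certainty: float, threshold: float) -> bool:
    """Associate the object property when the certainty meets the threshold."""
    return certainty >= threshold

# Example: a high-certainty radar return is kept; a low-certainty one is not.
kept = associate_as_ground_truth(0.95, 0.8)      # True
ignored = associate_as_ground_truth(0.40, 0.8)   # False
```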
Argument 3: Applicant argues that Northcutt does not teach the claimed care/no-care loss behavior. According to applicant, although Northcutt appears to provide a different treatment for positive and negative contributions to a loss function, this treatment does not depend on a further attribute previously provided.
Examiner Response to Argument 3: The examiner has considered the argument set forth above, but it is not persuasive. The examiner notes that it is unclear which specific limitation is being referred to, as no limitation recites anything about “different treatment” or a “treatment [that] depend[s] on a further attribute previously provided.” The examiner further asserts that Northcutt does teach the limitation directed to positive and negative contributions to the loss function based on confidence (see mapping below), and applicant’s arguments that Northcutt does not teach that the confidence is based on a perception capability of a sensor amount to attacking references individually when the rejection is based on a combination of references. The examiner maintains their position.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Claim 11: “a processing unit configured to be used by the machine-learning algorithm” invokes 112(f).
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
The specification describes corresponding structure for this limitation. Processing unit 17 provides the underlying structure of the “processing unit configured to be used by the machine-learning algorithm” (see Fig. 2 and paragraph [0048]): “FIGS. 1 and 2 depict a host vehicle 11 which includes radar sensors 13 (see FIG. 2) and a LIDAR system 15 which are in communication with a processing unit 17. As shown in FIG. 1, other vehicles are located in the environment of the host vehicle 11. The other vehicles are represented by bounding boxes 19 which are also referred to as labels 19 since these bounding boxes are provided based on data from the LIDAR system 15 for training a machine-learning algorithm. The training of the machine-learning algorithm is performed via the processing unit 17 (which also executes the algorithm itself) and uses primary data provided by the radar sensors 13.”
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-25 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention.
The claims recite, for example, “sparse primary data” and “dense auxiliary data” (see, e.g., claim 1 and claims depending therefrom; “sparse”: claims 1, 4, 5, 10, 11, 15-16, and 20; “dense”: claims 1, 5, 6, 10, 11, 16-20). The terms “sparse” and “dense” are relative terms of degree. The claims do not provide any objective boundary for determining when sensor data is “sparse” versus “dense,” and the specification does not set forth any clear standard, threshold, or quantitative criterion that would allow one of ordinary skill in the art to determine with reasonable certainty whether a given instance of sensor data falls within or outside the scope of these terms. Instead, “sparse” and “dense” are used qualitatively and subjectively, and their meaning depends on unspecified factors such as sensor resolution, sampling rate, or point density, which can vary widely across implementations.
While relative terms can in some cases be definite when the specification or the state of the art provides an accepted standard, here there is no such objective standard identified for “sparse primary data” and “dense auxiliary data.” As a result, one of ordinary skill in the art cannot ascertain with reasonable certainty the metes and bounds of the claimed subject matter based on these terms. Accordingly, the recitation of “sparse primary data,” “dense auxiliary data,” and similar “sparse”/“dense” formulations renders the scope of the claims indefinite under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition
of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the
conditions and requirements of this title.
Claims 1-11, 13-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1, (similar to 11 and analogous to 20)
Step 1: The claim is directed to a method, which falls under the category of a process. The claim satisfies Step 1.
Step 2A Prong 1:
“identifying labels based on the dense auxiliary data and the at least one property of entities, the identifying labels comprising determining a respective spatial area to which each label is related, wherein the at least one property of entities comprises i) a spatial location of objects surrounding the vehicle by generating bounding boxes which enclose the objects, respectively, and ii) a semantic segmentation including assignment of the objects surrounding the vehicle to respective object classes, the object classes including other vehicle, pedestrian, and animal; assigning at least one of a care attribute or a no-care attribute to each identified label by determining a perception capability of the at least one primary sensor for the respective label based on the sparse primary data captured by the at least one primary sensor and based on the dense auxiliary data captured by the at least one auxiliary sensor, the sparse primary data usable to determine a reference value for a respective spatial area and, for each label, the care attribute is assigned to the respective label if the reference value is greater than a reference threshold and the no-care attribute is assigned to the respective label if the reference value is smaller than or equal to the reference threshold;” -- The limitation is directed to analyzing information about objects in the environment (now expressly including generating bounding boxes and performing semantic segmentation into object classes such as vehicle, pedestrian, and animal), determining spatial areas for labels, and assigning each label a care or no-care attribute based on a perception capability and a comparison of a reference value to a threshold. Under a broadest reasonable interpretation, this is evaluation, classification, and labeling of information using observation, reasoning, and rule-based decision-making (e.g., deciding which objects in a scene are important based on a score and a threshold). 
Such steps can be performed in the human mind, with the aid of pen and paper (for example, looking at a scene, drawing boxes around objects, labeling them by type, and marking them as care/no-care depending on whether an associated value exceeds a threshold). Thus, this limitation is directed to a mental process.
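Purely as an illustration of the rule-based decision-making described above (not the applicant's implementation), the recited care/no-care threshold rule can be sketched in a few lines of Python; the threshold value and label names below are hypothetical:

```python
REFERENCE_THRESHOLD = 0.5  # hypothetical value

def assign_attribute(reference_value, threshold=REFERENCE_THRESHOLD):
    # Care attribute if the reference value is greater than the
    # reference threshold; no-care if smaller than or equal to it.
    return "care" if reference_value > threshold else "no-care"

# Hypothetical labels with reference values for their spatial areas.
labels = {"vehicle_1": 0.9, "pedestrian_1": 0.2, "animal_1": 0.5}
attributes = {name: assign_attribute(v) for name, v in labels.items()}
```

Each label is resolved by a single comparison against a fixed threshold, the kind of evaluation a person could perform with pen and paper.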
“defining a loss function for the model predictions, wherein the loss function receives a positive loss contribution for which weights of a model on which the machine-learning algorithm relies are increased if the weights contribute constructively to a prediction corresponding to the respective label and a negative loss contribution for which weights of the model are decreased if the weights contribute constructively to a prediction not corresponding to the respective label;” – This limitation is directed to defining and evaluating a loss function for predictions and adjusting weights of a model based on whether they contribute constructively or not. This is a mathematical operation on numerical values (predictions, labels, and weights) and can also be performed in the human mind or using pen and paper for a small set of weights. Thus, this limitation is directed to a mental process / mathematical concept.
“permitting negative contributions to the loss function for all labels; permitting positive contributions to the loss function for labels having a care attribute; and permitting positive contributions to the loss function for labels having a no-care attribute only if a confidence value of the model prediction for the respective label is greater than a predetermined threshold.” – This limitation is directed to applying logical conditions (if/then rules) to decide whether certain positive or negative contributions are included in the loss function based on label attributes and a comparison of a confidence value to a threshold. These are conditional mathematical operations and comparisons that can be performed in the human mind using evaluation, observation, and judgment with the aid of pen and paper. Thus, this limitation is directed to a mental process.
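For illustration only, the if/then gating rules recited in this limitation can be expressed as a short Python function (the function name and arguments are hypothetical, not claim language):

```python
def permit_contribution(kind, attribute, confidence, confidence_threshold):
    # Negative contributions are permitted for all labels.
    if kind == "negative":
        return True
    # Positive contributions are permitted for labels with a care attribute.
    if attribute == "care":
        return True
    # For no-care labels, a positive contribution is permitted only if the
    # model's confidence for the label exceeds the predetermined threshold.
    return confidence > confidence_threshold
```

The rule reduces to three conditional comparisons, each applied per label.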
“generating model predictions for the labels” – This limitation is directed to generating predictions for labels, i.e., applying a model or rule to input information to obtain outputs. Conceptually, a person could perform this step by applying a decision rule or formula on paper to “predict” a label outcome. Thus, this limitation is also directed to a mathematical concept. Further support is presented in the instant application at [0034]: “to generate model predictions for the labels via the machine-learning algorithm, to define a loss function for the model predictions, to permit negative contributions to the loss function for all labels,”.
Step 2A Prong 2 and Step 2B:
“A method for training a machine-learning algorithm configured to process sparse primary data captured by at least one primary sensor in order to determine at least one property of entities in an environment of the at least one primary sensor, the method implemented within a vehicle while the host vehicle is driving and comprising: receiving dense auxiliary data from at least one auxiliary sensor, wherein the at least one auxiliary sensor comprises at least one of a light ranging and detection (LIDAR) sensor and a camera, and wherein the at least one primary sensor comprises at least one radar sensor;” – This limitation recites that the abstract analysis is carried out in the context of a vehicle while it is driving, using specific sensors (radar as the primary sensor and LIDAR/camera as auxiliary sensors) to capture sparse primary data and dense auxiliary data. The limitation is directed to insignificant extra-solution activity that does not integrate the judicial exception into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of receiving data is a well-understood, routine, and conventional (WURC) activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)).
“via a machine-learning algorithm” – The limitation recites that the model predictions are generated using a machine-learning algorithm, but the algorithm is recited at a high level of generality without any particular technical implementation or improvement to computer functionality; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)).
Therefore, claim 1 is not patent eligible.
Regarding claim 2, (analogous to claim 13)
Step 1: The claim is directed to a method, which is a process, a statutory category of invention. The claim satisfies Step 1.
Step 2A Prong 1:
“The method according to claim 1, wherein the predetermined threshold for the confidence value is zero.” – The limitation sets the predetermined threshold for the confidence value, first introduced in claim 1, to zero. Evaluating a confidence value against a threshold of zero can be performed in the human mind, and thus the limitation is directed to a mental process.
There are no elements to be evaluated under Step 2A Prong 2 and Step 2B.
Thus, claim 2 is not patent eligible. Claim 13 is analogous to claim 2 and therefore faces the same rejection, given that the limitations are very similar, aside from the difference in claim type (method vs. system).
Regarding claim 3, (analogous to claim 14)
Step 1: The claim is directed to a method, which is a process, a statutory category of invention. The claim satisfies Step 1.
Step 2A Prong 1:
“wherein the reference value is determined based on radar energy detected by the radar sensor within the spatial area to which the respective label is related.” – The limitation is directed to determining a numerical reference value from radar energy detected in a spatial area associated with a label. Under a broadest reasonable interpretation, this is calculating or deriving a value from collected sensor data and associating that value with a region/label. Such operations (deriving a value from measurements and assigning it to a labeled region) can be performed in the human mind using evaluation, calculation, and judgment with the aid of pen and paper, and thus are directed to a mental process.
There are no elements to be evaluated under Step 2A Prong 2 and Step 2B.
Thus, claim 3 is not patent eligible. Claim 14 is analogous to claim 3 and therefore faces the same rejection, given that the limitations are very similar, aside from the difference in claim type (method vs. system).
Regarding claim 4, (analogous to claim 15)
Step 1: The claim is directed to a method, which is a process, a statutory category of invention. The claim satisfies Step 1.
Step 2A Prong 1:
“The method according to claim 3, wherein: ranges and angles at which radar energy is perceived are determined based on the sparse primary data captured by the radar sensor; and the ranges and angles are assigned to the spatial areas” – The limitation is directed to determining ranges and angles (which can be calculated with pen and paper) from data obtained by the radar sensor, and assigning the ranges and angles to spatial areas, which can be done using observation and judgment. Thus, the limitation is directed to a mental process.
“the respective labels are related in order to determine the at least one of the care attribute or the no-care attribute for each label.” – The limitation is directed to relating the spatial areas to the respective labels in order to determine whether the care or no-care attribute applies to each label, which is directed to a mental process.
There are no elements to be evaluated under Step 2A Prong 2 and Step 2B.
Thus, claim 4 is not patent eligible. Claim 15 is analogous to claim 4 and therefore faces the same rejection, given that the limitations are very similar, aside from the difference in claim type (method vs. system).
Regarding claim 5, (analogous to claim 16)
Step 1: The claim is directed to a method, which is a process, a statutory category of invention. The claim satisfies Step 1.
Step 2A Prong 1:
“an expected range, an expected range rate and an expected angle are estimated for each label based on the dense auxiliary data;” – The limitation is directed to estimating an expected range, an expected range rate, and an expected angle for each label based on the auxiliary data. Estimating expected values for labels based on data is a process that can be performed in the human mind with pen and paper, and thus the limitation is directed to a mental process.
“the expected range, the expected range rate and the expected angle of the respective label are assigned to a range, a range rate and an angle derived from the sparse primary data of the radar sensor in order to determine the radar energy associated with the respective label.” – The limitation is directed to assigning the expected range, range rate, and angle of each label to a range, range rate, and angle derived from the primary data of the radar sensor in order to determine the radar energy associated with the respective label. This act of assignment is a process that can be performed in the human mind using evaluation, observation, and/or judgment with the aid of pen and paper, and thus the limitation is directed to a mental process.
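As an illustration only (the matching rule and tolerance values below are hypothetical, not taken from the claims), the recited assignment of expected values to radar-derived values can be sketched as a simple tolerance-based comparison:

```python
def associate_radar_energy(expected, detections, tolerance):
    # expected: (range, range rate, angle) estimated for a label from
    # the dense auxiliary data.
    # detections: (range, range rate, angle, energy) tuples derived from
    # the sparse radar data.
    # tolerance: hypothetical per-dimension matching tolerances.
    exp_r, exp_rr, exp_a = expected
    tol_r, tol_rr, tol_a = tolerance
    energy = 0.0
    for r, rr, a, e in detections:
        if (abs(r - exp_r) <= tol_r
                and abs(rr - exp_rr) <= tol_rr
                and abs(a - exp_a) <= tol_a):
            energy += e  # accumulate energy associated with the label
    return energy
```

Each association is a sequence of comparisons and additions of the kind that could be carried out with pen and paper for a handful of detections.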
There are no elements to be evaluated under Step 2A Prong 2 and Step 2B.
Thus, claim 5 is not patent eligible. Claim 16 is analogous to claim 5 and therefore faces the same rejection, given that the limitations are very similar, aside from the difference in claim type (method vs. system).
Regarding claim 6, (analogous claim 17)
Step 1: The claim is directed to a method, which is a process, a statutory category of invention. The claim satisfies Step 1.
Step 2A Prong 1:
“The method according to claim 5, wherein the expected range rate is estimated for each label based on a speed vector which is estimated for a respective label” – The limitation is directed to estimating the range rate for each label based on a speed vector estimated for the respective label. Estimating range rates based on an estimated speed vector for labels is a process that can be performed in the human mind with pen and paper, and thus the limitation is directed to a mental process.
“by using differences of label positions determined based on the dense auxiliary data at different points in time.” – The limitation is directed to determining label positions based on data at different points in time, which under the broadest reasonable interpretation (BRI) is a process capable of being performed in the human mind (with pen and paper as well), and thus the limitation is directed to a mental process.
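For illustration only, a minimal sketch of estimating a range rate from the difference of two label positions (a finite-difference speed vector projected onto the line of sight); all coordinates, the time step, and the sensor position are hypothetical:

```python
import math

def expected_range_rate(pos_t0, pos_t1, dt, sensor=(0.0, 0.0)):
    # Speed vector from the difference of label positions at two times.
    vx = (pos_t1[0] - pos_t0[0]) / dt
    vy = (pos_t1[1] - pos_t0[1]) / dt
    # Project the speed vector onto the line of sight from the sensor
    # to obtain the radial component (the expected range rate).
    dx, dy = pos_t1[0] - sensor[0], pos_t1[1] - sensor[1]
    r = math.hypot(dx, dy)
    return (vx * dx + vy * dy) / r
```

The computation involves only subtraction, division, and a dot product, i.e., pen-and-paper arithmetic for a single label.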
There are no elements to be evaluated under Step 2A Prong 2 and Step 2B.
Thus, claim 6 is not patent eligible. Claim 17 is analogous to claim 6 and therefore faces the same rejection, given that the limitations are very similar, aside from the difference in claim type (method vs. system).
Regarding claim 7, (similar to claim 10 and part of claim 1)
Step 1: The claim is directed to a method, which is a process, a statutory category of invention. The claim satisfies Step 1.
Step 2A Prong 1:
“The method according to claim 2, wherein: a subset of auxiliary data points is selected which are located within the spatial area related to the respective label; for each auxiliary data point of the subset, it is determined whether a direct line of sight exists between the at least one primary sensor and the auxiliary data point; and for each label, a care attribute is assigned to the respective label if a ratio of a number of auxiliary data points for which the direct line of sight exists to a total number of auxiliary data points of the subset is greater than a further predetermined threshold.” – The limitation is directed to selecting data points located within the spatial area related to a respective label, determining whether a direct line of sight exists between the primary sensor and each data point, and assigning a care attribute to a label if the ratio of visible data points to total data points is greater than a predetermined threshold. This is a process that can be performed in the human mind using observation, evaluation, and judgment. Thus, the limitation is directed to a mental process.
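For illustration only, the recited ratio-to-threshold rule can be sketched as follows (the function name and the boolean encoding of line-of-sight are hypothetical):

```python
def assign_care_by_visibility(line_of_sight_flags, ratio_threshold):
    # line_of_sight_flags: one boolean per auxiliary data point in the
    # subset, True if a direct line of sight to the primary sensor exists.
    # Assign the care attribute if the visible ratio exceeds the threshold.
    ratio = sum(line_of_sight_flags) / len(line_of_sight_flags)
    return "care" if ratio > ratio_threshold else "no-care"
```

The decision reduces to counting, one division, and one comparison per label.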
There are no elements to be evaluated under Step 2A Prong 2 and Step 2B.
Thus, claim 7 is not patent eligible. Claim 10 and the corresponding part of claim 1 are analogous to claim 7 and thus face the same rejection.
Regarding claim 8, (analogous to claim 18)
Step 1: The claim is directed to a method, which is a process, a statutory category of invention. The claim satisfies Step 1.
Step 2A Prong 1:
“and the auxiliary data point is regarded as having a direct line of sight to the at least one primary sensor if the auxiliary data point is located within an instrumental field of view of at least one of the radar sensors and has a direct line of sight to at least one of the radar sensors.” – The limitation is directed to regarding an auxiliary data point as having a direct line of sight to the primary sensor if the data point is located within the instrumental field of view of at least one radar sensor and has a direct line of sight to at least one radar sensor. This determination can be performed in the human mind using evaluation, observation, and/or judgment with the aid of pen and paper, and thus the limitation is directed to a mental process.
Step 2A Prong 2 and Step 2B:
“The method according to claim 7, wherein: the at least one primary sensor includes a plurality of radar sensors;” – The limitation recites that the at least one primary sensor includes a plurality of radar sensors, which merely limits the primary sensor to a particular field of use; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)).
Thus, claim 8 is not patent eligible. Claim 18 is analogous to claim 8 and therefore faces the same rejection, given that the limitations are very similar, aside from the difference in claim type (method vs. system).
Regarding claim 9, (analogous to claim 19).
Step 1: The claim is directed to a method, which is a process, a statutory category of invention. The claim satisfies Step 1.
Step 2A Prong 1:
“for each pixel area, the auxiliary data point having a projection within the respective pixel area and being closest to the respective radar sensor is marked as visible; for each label, a number of visible auxiliary data points is determined which are located within the spatial area related to the respective label and which are marked as visible for at least one of the radar sensors; and the care attribute is assigned to the respective label if the number of visible auxiliary data points is greater than a visibility threshold.” – The limitation is directed to marking data points as visible and assigning a care attribute to a label if the number of visible data points is greater than a visibility threshold, which is a process that can be performed in the human mind using observation, evaluation, and judgment. Thus, the limitation is directed to a mental process.
“for each of the radar sensors, a specific subset of the auxiliary data points is selected for which the auxiliary data points are related to a respective spatial area within an instrumental field of view of the respective radar sensor;” – The limitation is directed to selecting a subset of data points related to a spatial area within the field of view of the respective radar sensor. The act of selecting data points designated to a certain area can be performed in the human mind, and thus the limitation is directed to a mental process.
“the auxiliary data points of the specific subset are projected to a cylinder or sphere surrounding the respective radar sensor; a surface of the cylinder or sphere is divided into pixel areas;” – The limitation is directed to projecting a specific subset of data points onto a cylinder or sphere and dividing the surface into pixel areas. Projecting data points onto a geometric surface, such as a cylinder, is a known mathematical operation, and thus the limitation is directed to a mathematical concept.
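For illustration only, a minimal sketch of the projection-and-binning operation described above (points are projected onto a sphere around a sensor at the origin, the surface is divided into azimuth/elevation pixel areas, and the closest point per pixel is marked as visible); the bin counts and coordinates are hypothetical:

```python
import math

def mark_visible(points, az_bins=36, el_bins=18):
    # Project each (x, y, z) point onto a sphere around the sensor
    # (at the origin) by binning its azimuth and elevation into pixel
    # areas; per pixel area, the point closest to the sensor is visible.
    closest = {}
    for i, (x, y, z) in enumerate(points):
        r = math.sqrt(x * x + y * y + z * z)
        az = int((math.atan2(y, x) + math.pi) / (2 * math.pi) * az_bins) % az_bins
        el = min(int((math.asin(z / r) + math.pi / 2) / math.pi * el_bins), el_bins - 1)
        if (az, el) not in closest or r < closest[(az, el)][0]:
            closest[(az, el)] = (r, i)
    return {i for _, i in closest.values()}
```

A point hidden behind a closer point in the same pixel area is not marked visible, mirroring the closest-point rule the claim recites.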
There are no elements to be evaluated under Step 2A Prong 2 and Step 2B.
Thus, claim 9 is not patent eligible. Claim 19 is analogous to claim 9 and therefore faces the same rejection, given that the limitations are very similar, aside from the difference in claim type (method vs. system).
Regarding claim 11, the majority of the claim’s limitations are analogous to claims 1 and 20 (see the rejection of claim 1 above and/or claim 20 below). Below, claim 11’s further limitations are evaluated under 35 U.S.C. 101:
Step 1: The claim is directed to a system, which is a machine, a statutory category of invention. The claim satisfies Step 1.
There are no elements to be evaluated under Step 2A Prong 1.
Step 2A Prong 2 and Step 2B:
“A system for training a machine-learning algorithm, the