DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on application EP23189076.5 filed on 8/1/23. It is noted, however, that applicant has not filed a certified copy of the foreign application as required by 37 CFR 1.55. Applicant must file the certified copy to perfect the claim to foreign priority.
Claim Objections
Claims 1, 7, 12-13, and 18 are objected to because of the following informalities:
In claims 1 and 13, “all pluralities of sensor data nodes” should be “all of the plurality of sensor data nodes”
In claims 1 and 13, “all pluralities of sensor data edges” should be “all of the plurality of sensor data edges”
In claims 7, 12, and 18, “a n-nearest-neighbors relationship” should be “an n-nearest-neighbors relationship”
In claim 13, “be-tween” should be “between” (remove the hyphen)
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“one or more advanced driver assistance systems” configured to process data in claim 2
“feature extractor configured to extract one or more features” in claims 4 and 15
“at least one processing unit and configured to store machine-readable instructions, wherein the machine-readable instructions cause the at least one processing unit to” perform various processing functions in claim 13
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification reveals the following:
“one or more advanced driver assistance systems”: disclosed as “any kind of system-on chip” in at least [0058] of the specification. This is adequate structure to perform the claimed function.
“feature extractor configured to extract one or more features”: extraction is one of the steps of Fig. 1 of the disclosure (See at least Fig. 1 and [0030] in the specification). The steps of Fig. 1 are performed by automotive control unit 800 (See at least [0021] in the specification). That control unit is disclosed as at least comprising a processor (See at least [0055]-[0056] in the specification). This is adequate structure to perform the claimed function.
“at least one processing unit”: disclosed as a processor in at least [0055]-[0056] of the specification. This is adequate structure to perform the claimed functions.
Since there is adequate structure in the specification to perform the claimed functions, no 112 rejections are given and no further action is required by applicant with respect to the above 112(f) interpretation.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claimed invention is directed to the concept of determining multiple graphs containing sensor data for different sensors, determining a transformation matrix describing a relationship between at least two of those graphs, combining multiple of the graphs together using the transformation matrix, and generating perception data based on the fused graph. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception and do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Regarding claims 1, 13, and 20, applicant recites, mutatis mutandis, An automotive control unit, comprising:
at least one processing unit; and
a memory coupled to the at least one processing unit and configured to store machine-readable instructions,
wherein the machine-readable instructions cause the at least one processing unit to:
obtain a plurality of automotive sensor data graphs, each automotive sensor data graph being based on sensor data captured by a corresponding automotive sensor of a plurality of automotive sensors, wherein:
each automotive sensor data graph comprises a plurality of sensor data nodes and a plurality of sensor data edges,
each sensor data node includes sensor data captured by the corresponding automotive sensor, and
each sensor data edge defines a sensor distance relationship between two sensor data nodes of the plurality of sensor data nodes;
obtain at least one calibration matrix, the at least one calibration matrix defining a transformation between respective pluralities of sensor data nodes of at least two automotive sensor data graphs;
generate a fused automotive sensor data graph based on the plurality of automotive sensor data graphs and the at least one calibration matrix, wherein:
the fused automotive sensor data graph comprises a plurality of fused automotive sensor data nodes and a plurality of fused automotive sensor data edges,
the plurality of fused automotive sensor data nodes includes all pluralities of sensor data nodes,
the plurality of fused automotive sensor data edges comprises all pluralities of sensor data edges and a plurality of fusion edges,
and each fusion edge defines a fusion distance relationship be-tween two sensor data nodes of two pluralities of sensor data nodes; and
provide the fused automotive sensor data graph to an automotive perception function, the automotive perception function being implemented by a graph neural network and being configured to generate perception data based on the fused automotive sensor data graph.
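For purposes of illustration only, the fusion recited in the claim can be sketched in code as follows. This sketch is not drawn from applicant's specification: the sensor values, the 4x4 homogeneous calibration matrix, the fixed-radius fusion criterion, and all names are hypothetical, chosen merely to show a fused graph that retains all original nodes and edges and adds cross-graph fusion edges.

```python
import numpy as np

# Two hypothetical sensor data graphs: nodes are 3-D points in each sensor's
# own frame; edges are index pairs over that graph's nodes.
graph_a = {"nodes": np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]),
           "edges": [(0, 1)]}
graph_b = {"nodes": np.array([[0.5, 0.0, 0.0], [0.5, 1.0, 0.0]]),
           "edges": [(0, 1)]}

# Hypothetical calibration matrix: a 4x4 homogeneous transform mapping
# graph_b's sensor frame into graph_a's frame (sensor B mounted 2 m ahead).
calib = np.eye(4)
calib[:3, 3] = [2.0, 0.0, 0.0]

def fuse(ga, gb, T, radius=3.0):
    """Fused graph: all original nodes and edges of both graphs, plus fusion
    edges between cross-graph node pairs closer than `radius` after applying
    the calibration matrix (a fixed-radius criterion, assumed for this sketch)."""
    # Transform graph_b's nodes into graph_a's coordinate frame.
    b_in_a = (T @ np.c_[gb["nodes"], np.ones(len(gb["nodes"]))].T).T[:, :3]
    nodes = np.vstack([ga["nodes"], b_in_a])
    offset = len(ga["nodes"])
    # Retain every original edge; re-index graph_b's edges into the fused graph.
    edges = list(ga["edges"]) + [(i + offset, j + offset) for i, j in gb["edges"]]
    # Fusion edges connect nodes of the two constituent graphs to each other.
    fusion_edges = [(i, j + offset)
                    for i in range(len(ga["nodes"]))
                    for j in range(len(b_in_a))
                    if np.linalg.norm(ga["nodes"][i] - b_in_a[j]) < radius]
    return {"nodes": nodes, "edges": edges + fusion_edges,
            "fusion_edges": fusion_edges}

fused = fuse(graph_a, graph_b, calib)
```

In this invented example the fused graph contains all four original nodes, both original edges (graph_b's edge re-indexed), and four new fusion edges derived from the calibration matrix.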
Claim 1 recites a series of steps and therefore is directed to a process. Claim 13 recites an automotive control unit and therefore is directed to an apparatus. Claim 20 recites a vehicle and therefore is also directed to an apparatus. All of these satisfy step 1 of the Section 101 analysis. Under the two-prong inquiry, the claim is eligible at revised step 2A unless: Prong One: the claim recites a judicial exception; and Prong Two: the exception is not integrated into a practical application of the exception.
The above claim steps are directed to the concept of determining multiple graphs containing sensor data for different sensors, determining a transformation matrix describing a relationship between at least two of those graphs, combining multiple of the graphs together using the transformation matrix, and generating perception data based on the fused graph, which is an abstract idea that can be performed by a user mentally or manually and falls within the Mental Processes grouping. (Prong one: YES, recites an abstract idea).
Other than reciting the use of a plurality of automotive sensors in claims 1, 13, and 20; An automotive control unit in claims 13 and 20; at least one processing unit and a memory in claim 13; and A vehicle in claim 20, nothing in the claim elements precludes the steps from being performed entirely by a human. The use of one or more computing devices is insufficient to amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Prong Two: NO, does not recite additional elements that integrate the abstract idea into a practical application similar to that shown in MPEP 2106.05).
Under step 2B, the claimed invention does not recite additional elements that are indicative of an inventive concept. The additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. The plurality of automotive sensors is described in paragraph [0023] of applicant’s specification as merely generic sensors, such as radar, lidar, etc., which are understood to be standard sensing devices in the vehicle art. The automotive control unit, at least one processing unit, and memory are described in paragraphs [0055]-[0056] of the specification as merely general-purpose computer components, such as memories and processors. The vehicle, in the claims, does not perform any control of vehicle actuators, but instead generally functions as the computing environment in which the computer components are located, as described in at least [0066] of the specification. Therefore, these additional limitations are no more than mere instructions to apply the exception using generic computer components. The recitation of generic processors/computers does not take the above limitations out of the Mental Processes grouping.
Moreover, the implementation of the abstract idea on generic computers and/or generic computer components does not add significantly more, similar to how the recitation of the computer in Alice amounted to mere instructions to apply the abstract idea on a generic computer. The claims merely invoke the additional elements as tools that are being used in their ordinary capacity. Further, the courts have found that simply limiting the use of the abstract idea to a particular environment does not add significantly more. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation.
Examiner’s note to help applicant overcome the 101 rejections: applicant can overcome the 101 rejections by presenting a persuasive argument that applicant’s claims are analogous to Example 42, claim 1 of the USPTO’s Subject Matter Eligibility Examples 37 to 42. Claim 1 of Example 42 is a medical data updating system that gathers non-standardized data from different sources and standardizes the data into a single standardized format before presenting the standardized data to a user. Despite being directed to data gathering and processing, Claim 1 of Example 42 was nevertheless found to be eligible because, “the additional elements recite a specific improvement over prior art systems by allowing remote users to share information in real time in a standardized format regardless of the format in which the information was input by the user” (See at least Example 42, Claim 1, in the USPTO’s Subject Matter Eligibility Examples 37 to 42).
Likewise, applicant can argue that applicant’s claims are also eligible because the additional elements of applicant’s claims amount to a specific improvement over prior art systems by allowing a vehicle perception system to obtain information from various different sensors in a standardized format (i.e., a fused graph) regardless of the format in which the information was first gathered by the respective sensor. In making such an argument, applicant should be sure to explicitly cite to Example 42, Claim 1, in order to make the analogy as clear as possible.
If applicant persuasively makes such an argument, then examiner will withdraw the 101 rejections.
Regarding claim 2, applicant recites The method of claim 1, wherein the perception data is configured to be processed by one or more advanced driver assistance systems.
However, a human can mentally or manually process perception data.
Moreover, the one or more advanced driver assistance systems could merely be general purpose computer components, so nothing in the claim elements precludes the steps from being performed entirely by a human. The use of one or more computing devices is insufficient to amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Prong Two: NO, does not recite additional elements that integrate the abstract idea into a practical application similar to that shown in MPEP 2106.05).
Under step 2B, the claimed invention does not recite additional elements that are indicative of an inventive concept. The additional elements when considered both individually and as an ordered combination do not amount to significantly more than the abstract idea. The one or more advanced driver assistance systems are described in paragraph [0058] of applicant’s specification as being implemented by mere general-purpose computers. Therefore this limitation is no more than mere instructions to apply the exception using generic computer components. The recitation of generic processors/computers does not take the above limitations out of the mental processes grouping.
Moreover, the implementation of the abstract idea on generic computers and/or generic computer components does not add significantly more, similar to how the recitation of the computer in Alice amounted to mere instructions to apply the abstract idea on a generic computer. The claims merely invoke the additional elements as tools that are being used in their ordinary capacity. Further, the courts have found that simply limiting the use of the abstract idea to a particular environment does not add significantly more. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation.
Regarding claims 3 and 14, applicant recites The method of claim 1, further comprising: translating each automotive sensor data graph from a sensor coordinate system of the corresponding automotive sensor to a vehicle coordinate system.
However, a human could mentally or manually perform that calculation.
Regarding claims 4 and 15, applicant recites The method of claim 1, further comprising: generating each automotive sensor data graph using a feature extractor configured to extract one or more features of the corresponding automotive sensor.
However, a user can mentally or manually extract features from data.
Regarding claims 5 and 16, applicant recites The method of claim 1, further comprising: reducing the plurality of nodes and the plurality of edges of each automotive sensor data graph using one of an autoencoder, principal component analysis, clustering or distance-based grouping.
However, autoencoders, principal component analysis, clustering, and distance-based grouping are all algorithms that are reducible to mathematical calculations, which a human can perform mentally or manually.
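For purposes of illustration only, a node reduction by distance-based grouping of the kind recited can be sketched as follows. The example is wholly hypothetical (invented points, an invented `threshold` parameter, and a greedy grouping rule assumed for the sketch), and merely shows that nearby nodes can be merged into fewer representative nodes.

```python
import numpy as np

# Illustrative sketch only: reducing a graph's nodes by distance-based
# grouping, one of the recited reduction alternatives. Nodes closer than
# `threshold` to a group's representative are merged into that group.
def reduce_nodes(points, threshold=1.0):
    """Greedy distance-based grouping: each point joins the first existing
    group whose representative (its first member) lies within `threshold`;
    otherwise it starts a new group."""
    groups = []  # lists of member indices
    reps = []    # representative point per group
    for idx, p in enumerate(points):
        for g, rep in zip(groups, reps):
            if np.linalg.norm(p - rep) < threshold:
                g.append(idx)
                break
        else:
            groups.append([idx])
            reps.append(p)
    # The reduced node set: the centroid of each group.
    return np.array([points[g].mean(axis=0) for g in groups]), groups

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.2, 5.1]])
reduced, groups = reduce_nodes(pts, threshold=1.0)
```

In this invented example the four original nodes collapse into two representative nodes, one per spatial cluster.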
Regarding claims 6 and 17, applicant recites The method of claim 1, wherein the obtaining the plurality of automotive sensor data graphs comprises: receiving the sensor data captured by each automotive sensor; and generating each automotive sensor data graph based on the sensor data captured by the corresponding automotive sensor.
However, a user could mentally or manually gather this data and generate these graphs.
Regarding claims 7 and 18, applicant recites The method of claim 1, wherein the sensor distance relationship is one of a n-nearest-neighbors relationship, a fixed-radius-relationship and a weighted-distance-relationship.
However, specifying the relationship between nodes in a graph does not change that a user can perform the calculations pertaining to the graphs mentally or manually.
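For purposes of illustration only, an n-nearest-neighbors edge construction of the kind recited can be sketched as follows; the points, the parameter `n`, and the use of Euclidean distance are all hypothetical choices made for this sketch, not drawn from the specification.

```python
import numpy as np

# Illustrative sketch only: building sensor data edges under an
# n-nearest-neighbors relationship, one of the three recited alternatives.
def knn_edges(points, n=2):
    """Connect each node to its n nearest neighbors by Euclidean distance,
    returning a sorted list of undirected (i, j) index pairs with i < j."""
    # Pairwise distance matrix; a node is not its own neighbor.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    edges = set()
    for i in range(len(points)):
        for j in np.argsort(d[i])[:n]:
            edges.add((min(i, int(j)), max(i, int(j))))  # deduplicate
    return sorted(edges)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
edges = knn_edges(pts, n=2)
```

Note that the relation is not symmetric before deduplication: the outlying fourth point selects two neighbors even though no other point selects it.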
Regarding claims 8 and 19, applicant recites The method of claim 1, wherein the at least one calibration matrix is based on a calibration of two automotive sensors with regard to one another.
However, a user can mentally or manually calculate such a matrix.
Regarding claim 9, applicant recites The method of claim 1, wherein the generating the fused automotive sensor data graph comprises: generating each fusion edge between two sensor data nodes of two pluralities of sensor data nodes by determining a distance between the two sensor data nodes based on the at least one calibration matrix.
However, a user can mentally or manually generate such additional edges connecting graphs using such matrices.
Regarding claim 10, applicant recites The method of claim 1, wherein the obtaining the at least one calibration matrix comprises: obtaining a plurality of calibration matrices, each calibration matrix defining a transformation between respective pluralities of sensor data nodes of two respective automotive sensor data graphs.
However, a user can calculate multiple such matrices mentally or manually just like a user could calculate one such matrix mentally or manually.
Regarding claim 11, applicant recites The method of claim 10, wherein the generating the fused automotive sensor data graph comprises: generating each fusion edge between the two sensor data nodes of the two pluralities of sensor data nodes by determining a distance between the two sensor data nodes based on a corresponding calibration matrix of the plurality of calibration matrices.
However, a user can mentally or manually generate such additional edges connecting graphs using such matrices.
Regarding claim 12, applicant recites The method of claim 1, wherein the fusion distance relationship is one of a n-nearest-neighbors relationship, a fixed-radius-relationship and a weighted-distance-relationship.
However, specifying the relationship between nodes in a graph does not change that a user can perform the calculations pertaining to the graphs mentally or manually.
Allowable Subject Matter
Claims 1-20 contain allowable subject matter, and would be allowable if the objections and rejections discussed in the previous sections of this Office action are overcome.
The closest prior art of record is Wu et al. (CN115861755A), hereinafter referred to as Wu. See the attached English translation of Wu for page numbers. The following is a statement of reasons for the indication of allowable subject matter:
Regarding claims 1 and 13, Wu discloses An automotive control unit (See at least Fig. 4 in Wu: Wu discloses a schematic block diagram of an example electronic device 40 that may be used to implement embodiments of the present disclosure [See at least Wu, Page 12]), comprising:
at least one processing unit (See at least Fig. 4 in Wu: Wu discloses that the electronic device 40 includes a computing unit 410, which can perform calculations according to a computer program stored in a read-only memory (ROM) 420 or a computer program loaded from a storage unit 480 into a random access memory (RAM) 430 [See at least Wu, Page 12]); and
a memory coupled to the at least one processing unit and configured to store machine-readable instructions (See at least Fig. 4 in Wu: Wu discloses that the electronic device 40 includes a computing unit 410, which can perform calculations according to a computer program stored in a read-only memory (ROM) 420 or a computer program loaded from a storage unit 480 into a random access memory (RAM) 430 [See at least Wu, Page 12]),
wherein the machine-readable instructions cause the at least one processing unit to (See at least Fig. 4 in Wu: Wu discloses that the electronic device 40 includes a computing unit 410, which can perform calculations according to a computer program stored in a read-only memory (ROM) 420 or a computer program loaded from a storage unit 480 into a random access memory (RAM) 430 [See at least Wu, Page 12]):
obtain a plurality of automotive sensor data graphs (See at least Fig. 2 in Wu: Wu discloses Graph Projection, which is to project the bird's-eye view features of the image to the bird's-eye view knowledge map of the image, and project the bird's-eye view features of the point cloud to the bird's-eye view knowledge map of the point cloud [See at least Wu, Page 12]), each automotive sensor data graph being based on sensor data captured by a corresponding automotive sensor of a plurality of automotive sensors (See at least Fig. 2 in Wu: Wu discloses that Camera BEV represents the bird's-eye view of the image corresponding to the camera [See at least Wu, Page 9]. Xcam represents the image bird's-eye view feature [See at least Wu, Page 9]. Lidar BEV represents the bird's-eye view of the image corresponding to the radar, and Xlid represents the bird's-eye view feature of the point cloud [See at least Wu, Page 9]), wherein:
each automotive sensor data graph comprises a plurality of sensor data nodes and a plurality of sensor data edges (See at least Fig. 2 in Wu: Wu discloses Graph Projection, which is to project the bird's-eye view features of the image to the bird's-eye view knowledge map of the image, and project the bird's-eye view features of the point cloud to the bird's-eye view knowledge map of the point cloud [See at least Wu, Page 12]),
each sensor data node includes sensor data captured by the corresponding automotive sensor (Wu discloses that The image bird's-eye view knowledge map features determined based on image bird's-eye view features and the point cloud bird's-eye view knowledge map features determined based on point cloud bird's-eye view features are both composed of the characteristics of each node included, and the features of each node can integrate the pixel-level information in the feature map [See at least Wu, Page 3]. Wu further discloses that Point cloud bird's-eye view features are also collected for a 360-degree omni-directional environment, and contain a large number of features [See at least Wu, Page 3]. Wu further discloses that By determining the characteristics of the knowledge graph and processing the node characteristics in the knowledge graph as a unit, the amount of data processing can be greatly reduced and the processing speed can be improved [See at least Wu, Page 3]), and
each sensor data edge defines a sensor distance relationship between two sensor data nodes of the plurality of sensor data nodes (See at least Fig. 2 in Wu: Wu discloses Graph Projection, which is to project the bird's-eye view features of the image to the bird's-eye view knowledge map of the image, and project the bird's-eye view features of the point cloud to the bird's-eye view knowledge map of the point cloud [See at least Wu, Page 12]. It will be appreciated that, since the edges exist as depicted, they must have values associated with them);
obtain at least one calibration matrix, the at least one calibration matrix defining a transformation between respective pluralities of sensor data nodes of at least two automotive sensor data graphs (Wu discloses that determining the bird's-eye view knowledge map feature based on the bird's-eye view feature of the image can be expressed by the following formula 1: Pcam=ZcamXcamWcam; ... formula one [See at least Wu, Page 3]. Wu further discloses that Zcam represents the introduced projection matrix, which is used to project the image bird's-eye view feature to the image bird's-eye view knowledge map feature [See at least Wu, Page 3]. Wu further discloses that determination of the point cloud bird's-eye view knowledge map feature based on the point cloud bird's-eye view feature can be expressed by the following formula 2: Plid=ZlidXlidWlid; ... formula two [See at least Wu, Page 3]. Wu further discloses that Zlid represents the introduced projection matrix, which is used to project the point cloud bird's-eye view feature to the point cloud bird's-eye view knowledge map feature [See at least Wu, Page 4]);
generate a fused automotive sensor data graph based on the plurality of automotive sensor data graphs and the at least one calibration matrix (Wu discloses that the fused features are determined based on the image bird's-eye view knowledge map feature and the point cloud bird's-eye view knowledge map feature [See at least Wu, Page 4]. It will be appreciated from the formulas discussed earlier that this does involve the calibration matrices).
However, none of the prior art of record, taken either alone or in combination, teaches or suggests the automotive control unit wherein:
the fused automotive sensor data graph comprises a plurality of fused automotive sensor data nodes and a plurality of fused automotive sensor data edges,
the plurality of fused automotive sensor data nodes includes all pluralities of sensor data nodes,
the plurality of fused automotive sensor data edges comprises all pluralities of sensor data edges and a plurality of fusion edges, and
each fusion edge defines a fusion distance relationship be-tween two sensor data nodes of two pluralities of sensor data nodes; and
provide the fused automotive sensor data graph to an automotive perception function, the automotive perception function being implemented by a graph neural network and being configured to generate perception data based on the fused automotive sensor data graph.
In order for a reference to read on the above missing limitations, the reference would have to teach a fusion of multiple graphs, each representing a different sensor, whose result retains all of the nodes and edges of the constituent graphs from before the fusion, as well as additional “fusion” edges connecting various of the original nodes to each other. This particular kind of fusion is not taught by the prior art of record.
The closest that any reference comes is Wu, since Wu discloses that transformation matrices for converting between different sensors may be used (Wu discloses that determining the bird's-eye view knowledge map feature based on the bird's-eye view feature of the image can be expressed by the following formula 1: Pcam=ZcamXcamWcam; ... formula one [See at least Wu, Page 3]. Wu further discloses that Zcam represents the introduced projection matrix, which is used to project the image bird's-eye view feature to the image bird's-eye view knowledge map feature [See at least Wu, Page 3]. Wu further discloses that determination of the point cloud bird's-eye view knowledge map feature based on the point cloud bird's-eye view feature can be expressed by the following formula 2: Plid=ZlidXlidWlid; ... formula two [See at least Wu, Page 3]. Wu further discloses that Zlid represents the introduced projection matrix, which is used to project the point cloud bird's-eye view feature to the point cloud bird's-eye view knowledge map feature [See at least Wu, Page 4]) which allow data from graphs representing different sensors to be merged with each other to form fused data points (Wu discloses that the fused features are determined based on the image bird's-eye view knowledge map feature and the point cloud bird's-eye view knowledge map feature [See at least Wu, Page 4]).
However, Wu is silent as to the outcome of the fusion itself being a graph. Moreover, Wu is silent as to retaining all of the original nodes and edges from all original graphs in the resultant graph. Further, Wu is silent as to creating any additional edges linking any of the original constituent graphs together as part of the fusion process while retaining all of the original nodes and edges in the final result. Therefore, Wu fails to teach or suggest the above missing limitations.
None of the other prior art of record resolves these deficiencies of Wu.
For at least the above stated reasons, claims 1-20 contain allowable subject matter.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAEEM T ALAM whose telephone number is (571)272-5901. The examiner can normally be reached M-F, 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FADEY JABR, can be reached at (571) 272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAEEM TASLIM ALAM/Examiner, Art Unit 3668