DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 12-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite the abstract ideas explained in the Step 2A, Prong I analysis below. This judicial exception is not integrated into a practical application, as explained in the Step 2A, Prong 2 analysis below. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as explained in the Step 2B analysis below.
STEP 2A, PRONG I:
Step 2A, prong 1, of the 2019 Guidance, first looks to whether the claim recites any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activities such as a fundamental economic practice, or mental processes). 84 Fed. Reg. at 52-54.
The method of claims 12, 17, and 22 is directed to the limitations “providing pixels of a radar image that are assigned to the object; and providing a point cloud, wherein the point cloud includes at least one point that represents a radar reflection assigned to the object, through at least one property assigned to the object; extracting first features that characterize the object from the pixels; extracting second features that characterize the object from the point cloud; and determining the classification of the object depending on the first features and the second features,” which amount to a mental process performable in the human mind or with pen and paper. Note that the “point cloud, first features and second features” that form the basis for the claimed processing need not be particularly complex. As such, claims 12, 17, and 22 recite an abstract idea.
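For illustration only, and not as a characterization of the applicant's actual implementation, the recited steps can be expressed as the following minimal Python sketch; every name and every feature computation below is a hypothetical placeholder, chosen merely to show that each recited step is an ordinary observe-and-evaluate operation of the kind performable mentally or with pen and paper:

    # Illustrative sketch only; all names and computations are hypothetical
    # placeholders, not the applicant's method or the prior art's implementation.
    import numpy as np

    def extract_first_features(pixels):
        # "extracting first features that characterize the object from the pixels"
        # (placeholder: simple intensity statistics)
        return np.array([pixels.mean(), pixels.std(), pixels.max()])

    def extract_second_features(point_cloud):
        # "extracting second features that characterize the object from the point cloud"
        # (placeholder: centroid and spread of the reflection points)
        return np.concatenate([point_cloud.mean(axis=0), point_cloud.std(axis=0)])

    def classify(first_features, second_features):
        # "determining the classification of the object depending on the first
        # features and the second features" (placeholder threshold rule)
        return "vehicle" if first_features[0] + second_features[0] > 1.0 else "other"

    pixels = np.random.rand(16, 16)      # pixels of a radar image assigned to the object
    point_cloud = np.random.rand(8, 3)   # points representing radar reflections
    label = classify(extract_first_features(pixels), extract_second_features(point_cloud))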
The method of claims 13 and 20 is directed to the limitations “the pixels are mapped to the first features, wherein the point cloud is mapped to the second features, and wherein an input variable is determined depending on the first features and the second features, wherein the input variable is mapped to the classification,” which amount to a mental process performable in the human mind or with pen and paper. Note that the “point cloud, first features and second features” that form the basis for the claimed processing need not be particularly complex. As such, claims 13 and 20 recite an abstract idea.
The method of claims 16 and 19 is directed to the limitation “a signal for controlling at least one actuator is determined depending on the classification,” which amounts to a mental process performable in the human mind or with pen and paper. Note that the classification on which the claimed signal determination depends need not be particularly complex. As such, claims 16 and 19 recite an abstract idea.
STEP 2A, PRONG II:
Step 2A, prong 2, of the 2019 Guidance, next analyzes whether the claims recite additional elements that individually or in combination integrate the judicial exception into a practical application. 2019 Guidance, 84 Fed. Reg. at 53-55. The 2019 Guidance identifies considerations indicative of whether an additional element or combination of elements integrate the judicial exception into a practical application, such as an additional element reflecting an improvement in the functioning of a computer or an improvement to other technology or technical field. Id. at 55; MPEP § 2106.05(a).
In addition to reciting the above-noted abstract ideas, the issue is whether the claims as a whole, including the various additional elements, integrate the abstract ideas into a practical application. In other words, do the claims as a whole impose any meaningful limits, i.e., an improvement in technology?
In addition to the abstract ideas, the claims recite the following additional elements: a computer implementation (claim 12); a first neural network trained to map the pixels to the first features, a second neural network trained to map the point cloud to the second features, and a third neural network trained to map the input variable to the classification (claim 13); a device (claim 17); and a non-transitory medium (claim 22).
These additional limitations are directed to data gathering and data processing, which is extra-solution activity, and therefore none of them provides a meaningful limit on the claimed invention.
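For illustration only, the additional neural-network elements recited in claim 13 amount to the generic arrangement sketched below; the layer sizes, tensor shapes, and names are hypothetical placeholders, not the applicant's disclosed architecture, and the sketch is offered only to show that the networks serve as generic tools applied to the abstract idea:

    # Minimal sketch of the generic three-network arrangement of claim 13;
    # all layer sizes and tensor shapes are hypothetical placeholders.
    import torch
    import torch.nn as nn

    first_net = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 32), nn.ReLU())  # pixels -> first features
    second_net = nn.Sequential(nn.Flatten(), nn.Linear(8 * 3, 32), nn.ReLU())   # point cloud -> second features
    third_net = nn.Linear(64, 4)                                                # input variable -> classification

    pixels = torch.rand(1, 16, 16)       # radar-image pixels assigned to the object
    point_cloud = torch.rand(1, 8, 3)    # radar-reflection points assigned to the object

    first_features = first_net(pixels)
    second_features = second_net(point_cloud)
    input_variable = torch.cat([first_features, second_features], dim=1)
    classification = third_net(input_variable).argmax(dim=1)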
Regarding claims 14 and 21, “the first neural network and the second neural network and the third neural network are trained independently of one another or that at least two of the first, second, and third neural networks are trained jointly” is insignificant pre-solution activity. Claim recitations that amount to insignificant pre-solution activity do not integrate the judicial exception into a practical application.
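For illustration only, the “independently or jointly” training alternative of claims 14 and 21 reduces to the conventional choice of which parameters an optimizer updates; the sketch below, in which the networks, data, and optimizer settings are all hypothetical placeholders, shows both variants:

    # Sketch of independent vs. joint training per claims 14 and 21;
    # networks, data, and settings are hypothetical placeholders.
    import torch
    import torch.nn as nn

    first_net, second_net, third_net = nn.Linear(256, 32), nn.Linear(24, 32), nn.Linear(64, 4)
    x1, x2 = torch.rand(4, 256), torch.rand(4, 24)   # placeholder pixel / point-cloud inputs
    y = torch.randint(0, 4, (4,))                    # placeholder class labels

    # Joint training: one optimizer updates all three networks from one loss.
    joint_params = list(first_net.parameters()) + list(second_net.parameters()) + list(third_net.parameters())
    joint_opt = torch.optim.SGD(joint_params, lr=0.01)
    joint_opt.zero_grad()
    nn.CrossEntropyLoss()(third_net(torch.cat([first_net(x1), second_net(x2)], dim=1)), y).backward()
    joint_opt.step()

    # Independent training: one network is updated against its own
    # (placeholder) objective while the others are left untouched.
    solo_opt = torch.optim.SGD(first_net.parameters(), lr=0.01)
    solo_opt.zero_grad()
    nn.MSELoss()(first_net(x1), torch.zeros(4, 32)).backward()
    solo_opt.step()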
Regarding claims 15 and 18, “raw data for determining the radar image are sensed by at least one sensor, and wherein the radar image is determined depending on the raw data sensed” is mere data gathering. Claim recitations directed to generic data gathering do not integrate the judicial exception into a practical application.
STEP 2B:
Under step 2B of the 2019 Guidance, the issue is whether the claims add any specific limitations beyond the judicial exception that, either alone or as an ordered combination, amount to more than “well-understood, routine, conventional” activity in the field. 84 Fed. Reg. at 56; MPEP § 2106.05(d).
The issue is whether the claims as a whole, including the additional limitations, as an ordered combination, amount to more than “well-understood, routine, conventional” activity in the field. In other words, the issue is whether the additional elements, in combination as well as individually, amount to an inventive concept.
Again, the additional limitations are directed to mere data gathering and data processing, which is “well-understood, routine, and conventional” activity in the field. Thus, the additional limitations, alone or in combination, do not amount to an inventive concept.
Overall, the claims are directed to classifying an object from radar data, which is in and of itself an abstract idea because such classification is a mental process and, to the extent it involves data manipulation, extra-solution activity. Again, a claim for a useful or beneficial abstract idea is still an abstract idea. See Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371, 1379-80 (Fed. Cir. 2015). As such, the ordered combination of features is directed solely to abstract ideas or extra-solution activity, as discussed supra.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 12-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yoo et al. (US 20200294257).
Regarding claims 12, 17, and 22, Yoo teaches a computer-implemented method for determining a classification for an object (abs, “fuse 2D and 3D object detection results for classifying objects”), the method comprising the following steps: providing pixels of a radar image that are assigned to the object; and providing a point cloud (abs, “a point cloud may be filtered to include only points”), wherein the point cloud includes at least one point that represents a radar reflection assigned to the object (para 31), through at least one property assigned to the object (para 31); extracting first features that characterize the object from the pixels (para 31, “locations and corresponding image-space locations of LIDAR data, RADAR data, and/or other depth data may be known, or determined, using intrinsic and/or extrinsic parameters”); extracting second features that characterize the object from the point cloud (para 29 and 31 and 67, “one or more of the layers may include an input layer. The input layer may hold values associated with the input (e.g., vectors, tensors, etc. corresponding to sensor data, voxelized sensor data, feature vectors, etc.). For example, when the sensor data is an image(s), the input layer may hold values representative of the raw pixel values of the image(s) as a volume (e.g., a width, W, a height, H, and color channels, C (e.g., RGB), such as 32×32×3), and/or a batch size, B (e.g., where batching is used”); and determining the classification of the object depending on the first features and the second features (para 67 and claim 16).
Regarding claims 13 and 20, Yoo teaches the pixels are mapped to the first features using a first neural network trained for mapping the pixels to the first features, wherein the point cloud is mapped to the second features using a second neural network trained to map the point cloud to the second features (claim 16 and fig 6), and wherein an input variable is determined depending on the first features and the second features, wherein the input variable is mapped to the classification using a third neural network trained to map the input variable to the classification (para 47 and fig 6).
Regarding claims 14 and 21, Yoo teaches the first neural network and the second neural network and the third neural network are trained independently of one another or that at least two of the first, second, and third neural networks are trained jointly (para 28 and 30).
Regarding claims 15 and 18, Yoo teaches raw data for determining the radar image are sensed by at least one sensor, and the radar image is determined depending on the raw data sensed (para 24).
Regarding claims 16 and 19, Yoo teaches a signal for controlling at least one actuator is determined depending on the classification (para 80 and 81, “The controller(s) 736 may provide the signals for controlling one or more components and/or systems of the vehicle 700 in response to sensor data received from one or more sensors (e.g., sensor inputs)”).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY A BRAINARD whose telephone number is (571)272-2132. The examiner can normally be reached Monday - Friday 8:30 a.m.-5 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Kelleher can be reached at 571-272-7753. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIMOTHY A BRAINARD/Primary Examiner, Art Unit 3648