Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The current application was filed on Apr. 16, 2024.
Election/Restrictions
Applicant’s election of Invention II, Claims 6-20, drawn to a system and a processor for performing operations based on the output data, without traverse in the reply filed on 01/07/2026, is acknowledged. Applicant added new claims 21-25 to Invention II, and those claims are entered. Claims 1-5 are non-elected and have been canceled.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 6-25 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12306298 and claims 1-20 of U.S. Patent No. 12332349. Although the conflicting claims are not identical, they are not patentably distinct from each other because the examined claims (e.g., claim 6) are generic to all that is recited in, e.g., claim 1 of U.S. Patent No. 12306298 and claim 1 of U.S. Patent No. 12332349.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 6-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The independent claims are reproduced below; an illustrative sketch of the recited pipeline follows the reproduced claims:
Claim 6. A system comprising: one or more processors to:
obtain sensor data obtained using one or more ultrasonic sensors of one or more machines;
generate input data based at least on augmenting at least a portion of the sensor data;
generate, based at least on one or more machine learning models processing the input data, output data representative of one or more locations associated with one or more objects or features; and
perform one or more operations based at least on the output data.
Claim 18. One or more processors comprising: processing circuitry to cause performance of one or more operations based at least on output data generated using one or more machine learning models processing input data, wherein at least a first portion of the input data is generated using first sensor data obtained using one or more ultrasonic sensors and at least a second portion of the input data is generated using one or more outputs from one or more neural networks processing second sensor data corresponding to the first sensor data.
Claim 21. A method comprising:
obtaining sensor data obtained using one or more ultrasonic sensors of one or more machines;
generating input data based at least on augmenting at least a portion of the sensor data;
generating, based at least on one or more machine learning models processing the input data, output data representative of one or more locations associated with one or more objects or features; and
performing one or more operations based at least on the output data.
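For illustration only, the pipeline recited in the independent claims (obtain ultrasonic sensor data, augment it into input data, process it with a machine learning model, and act on the output) may be sketched in Python as follows. This is a minimal sketch under stated assumptions; every function, variable, and model name is hypothetical and is not drawn from the claims or Applicant’s specification.

    import numpy as np

    def claimed_pipeline(ultrasonic_readings, model):
        # Step 1: obtain sensor data (here, an array of echo measurements).
        sensor_data = np.asarray(ultrasonic_readings, dtype=float)

        # Step 2: generate input data by augmenting at least a portion of
        # the sensor data (small Gaussian noise is one common augmentation).
        rng = np.random.default_rng(seed=0)
        input_data = sensor_data + rng.normal(0.0, 0.01, sensor_data.shape)

        # Step 3: generate output data representative of object/feature
        # locations (the model is a hypothetical callable).
        output_data = model(input_data)

        # Step 4: perform one or more operations based at least on the
        # output data (a placeholder threshold check, e.g., object < 1 m).
        if np.min(output_data) < 1.0:
            print("Obstacle detected: trigger an avoidance operation.")
        return output_data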
101 Analysis - Step 1: Statutory category – Yes
The claims recite a system, one or more processors, or a method including at least one step. The claims therefore fall within one of the four statutory categories. See MPEP 2106.03.
101 Analysis - Step 2A Prong one evaluation: Judicial Exception – Yes – Mental processes.
In Step 2A, Prong one of the 2019 Patent Eligibility Guidance (PEG), a claim is to be analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity.
The Office submits that the foregoing limitation(s) constitute judicial exceptions in terms of “mental processes” because, under their broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper.” See MPEP 2106.04(a)(2)(III).
The claim recites limitations, e.g., in claim 6, of obtaining sensor data, generating input data, and generating output data. These are simple processing steps that, under their broadest reasonable interpretation, cover performance in the mind but for the recitation of “one or more processors.” That is, other than reciting the processor, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the processor language, the claim encompasses a person looking at collected data and forming a simple judgment. The mere nominal recitation of the processor does not take the claim limitations out of the mental process grouping.
Thus, the claims recite a mental process.
101 Analysis - Step 2A Prong two evaluation: Practical Application – No
In Step 2A, Prong two of the 2019 PEG, a claim is to be evaluated whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements such as: merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
The Office submits that the foregoing limitation(s) recite additional elements that do not integrate the recited judicial exception into a practical application.
The claim recites additional elements or steps of “obtain sensor data … using one or more ultrasonic sensors of one or more machines; generate, based at least on one or more machine learning models processing the input data, output data …; and perform one or more operations based at least on the output data.”
The steps of obtaining data from the ultrasonic sensors and generating output data based on one or more machine learning models are recited at a high level of generality (i.e., as a general means of gathering vehicle and road condition data) and amount to mere data gathering, which is a form of insignificant extra-solution activity. The step of performing one or more operations based at least on the output data is also recited at a high level of generality and amounts to mere post-solution activity, which is likewise a form of insignificant extra-solution activity.
The recitation of a “system comprising one or more processors” to gather data and to perform post-solution operations based on the gathered data, using sensors and machine learning models, merely describes how to generally “apply” the mental process using a generic or general-purpose system with a processor, i.e., a computer with well-known components.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis - Step 2B evaluation: Inventive concept – No
In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the obtaining and generating steps and the perform-one-or-more-operations step were considered to be insignificant extra-solution activity in Step 2A, and thus they are re-evaluated in Step 2B to determine whether they are more than what is well-understood, routine, conventional activity in the field. The background recites that the sensors are all conventional sensors mounted on the vehicle, and the specification does not provide any indication that the system is anything other than a conventional computer with well-known sensors and machine learning models. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). The Federal Circuit reached similar conclusions in Trading Techs. Int’l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017). Accordingly, a conclusion that the collecting step is well-understood, routine, conventional activity is supported under Berkheimer.
Thus, the claim is ineligible.
Dependent Claims
Dependent claims 7-17, 19-20 & 22-25 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 7-17, 19-20 & 22-25 are not patent eligible under the same rationale as provided in the rejection of claims 6, 18 & 21.
Therefore, claims 6-25 are ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 6-20 & 25 are rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because the claims purport to invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, but fail to recite a combination of elements as required by that statutory provision and thus cannot rely on the specification to provide the structure, material, or acts to support the claimed function. As such, the claims recite a function that has no limits and covers every conceivable means for achieving the stated function, while the specification discloses at most only those means known to the inventor. Accordingly, the disclosure is not commensurate with the scope of the claims.
Regarding claims 6 & 18, each recites (e.g., claim 6) “A system comprising: one or more processors to: …”, which constitutes a single-means claim of undue breadth (see MPEP 2164.08(a)).
The specification does not disclose any embodiment that operates with only circuits and a single processor (i.e., claim 18, “one or more processors and/or circuits”).
In addition, the claims fail to define how the “ultrasonic sensors” and “machine learning models” are related to the processor.
Regarding claim 12, the recitation “causing a machine to navigate according to the trajectory” is indefinite. The claim fails to define how and in what manner the system with the processor and “a machine” are related. Claim 19 has this same issue.
Regarding claims 16 & 25, the recitation (e.g., in claim 16) “wherein the generation of the input data comprises: determining one or more first yaw angles associated with the one or more ultrasonic sensors when obtaining the sensor data; determining, based at least on the one or more first yaw angles, one or more second yaw angles; generating one or more projection matrices representative of at least the one or more second yaw angles; and generating the input data based at least on the sensor data and the one or more projection matrices” is unclear because the input data is information data; thus, it is not clear how and in what manner the input data enables performing the “determining” and “generating” steps.
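For context only, one conventional way a yaw angle may be turned into a projection (rotation) matrix and applied to sensor data is sketched below. This is an illustrative assumption about one possible reading of the limitation, not a characterization of Applicant’s disclosure; the angle offset and points are hypothetical.

    import numpy as np

    def yaw_to_projection_matrix(yaw_rad):
        # 2D rotation matrix for a yaw angle (one common "projection" convention).
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        return np.array([[c, -s],
                         [s,  c]])

    first_yaw = np.deg2rad(30.0)                 # hypothetical sensor yaw
    second_yaw = first_yaw + np.deg2rad(5.0)     # assumed derived offset
    P = yaw_to_projection_matrix(second_yaw)
    points = np.array([[1.0, 0.0], [0.0, 2.0]])  # hypothetical sensor points
    input_data = points @ P.T                    # projected points as input data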
Claims 7-17 & 19 depend upon rejected claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-25 are rejected under 35 U.S.C. 103 as being unpatentable over Park (US 2021/0406560).
With regard to claim 6, Park discloses a system comprising sensors of a vehicle, and one or more processors to compute signals (2D or 3D) using machine learning models, see [0027]+; the one or more processors to:
obtain sensor data obtained using one or more ultrasonic sensors of one or more machines (sensor data received using ultrasonic sensors, LiDAR, RADAR, see [0027]+);
generate input data based at least on augmenting at least a portion of the sensor data (the sensor data generated for training DNNs, see [0028]+ & [0034]+);
generate, based at least on one or more machine learning models processing the input data, output data representative of one or more locations associated with one or more objects or features (generated output signals to compute the feature outputs from one or more feature extractor layers of respective DNNs, see [0028]; a fused output 122 generated to indicate a shape, orientation, classification, and locations for objects or features in the environment, see [0034]-[0040]+); and
perform one or more operations based at least on the output data (the fused output 122 used by an autonomous driving software stack 124 to perform one or more operations by the vehicle, see [0055]+).
Although Park’s disclosure does not use the same word-for-word language, the Examiner interprets the vehicle’s system (as shown in Figs. 1 and 8), which uses a sensor system (including ultrasonic sensors) for obtaining sensor data, a fusion DNN 120, and a fused output 122 for computing output signals for controlling a vehicle, as equivalent to the scope of the claim. For this reason, Park at least suggests, if it does not anticipate, the claimed subject matter.
With regard to claim 7, Park teaches the system of claim 6, wherein the generation of the input data comprises:
causing the sensor data to be associated with augmentation data representative of information associated with at least one of the one or more ultrasonic sensors or one or more histograms represented by the sensor data (the sensor data may undergo pre-processing (e.g., noise balancing, augmentation, etc.), see [0030]-[0032]+);
generating, based at least on one or more neural networks processing the sensor data and the augmentation data, one or more outputs (generated output signals to compute the feature outputs from one or more feature extractor layers of respective DNNs, see [0028]; a fused output 122 generated to indicate a shape, orientation, classification, and locations for objects or features in the environment, see [0034]-[0040]+); and
generating the input data based at least on the one or more outputs (the output of the fusion DNN 120 generates the input data to the fused output 122, see [0039]+).
With regard to claims 8-10, Park teaches that the fusion DNN 120 generates the 3D signals and includes multiple CNNs with input layers that hold values representative of a volume (e.g., a width, a height, etc.), see [0044]+, which meets the scope of claims 8-10.
With regard to claim 11, Park teaches the system of claim 6, wherein the output data represents one or more maps indicating the one or more locations associated with the one or more objects or features, the one or more maps including at least one of: one or more height maps; one or more occupancy maps; or one or more distance maps (the fused output 122 generates the input channel indicating a shape, orientation, and classification for objects or features with location in the environment, see [0034]-[0035]+).
With regard to claim 12, Park teaches the system of claim 6, wherein the performance of the one or more operations comprises: determining a trajectory based at least on the one or more locations associated with the one or more objects or features; and causing a machine to navigate according to the trajectory (Fig. 2A represents object detections 206A, 206B, etc., with bounding shapes indicating a location and shape, where the point in the bounding shape indicates direction, see [0036]+).
With regard to claim 13, Park teaches the system of claim 6, wherein the performance of the one or more operations comprises: determining, based at least on the one or more locations of the one or more objects or features and one or more second locations for the one or more objects or features as represented by ground truth data, one or more losses; and updating, based at least on the one or more losses, one or more parameters associated with the one or more machine learning models (the drive stack 124 includes a world model manager that is used to generate, update, and define a world model, see [0055]+).
With regard to claim 14, Park teaches the system of claim 6, wherein: the sensor data represents one or more first histograms; and the generation of the input data comprises: generating, based at least on adding noise to the one or more first histograms, second sensor data representative of one or more second histograms; and generating the input data based at least on the second sensor data (the sensor data represents sensory fields of sensors, e.g., a value graph for ultrasonic sensors, see [0030]+; a lane graph represents the path or paths available to the vehicle, wherein the graph is equivalent to the histograms, see [0057]+).
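As illustrative context for the histogram-noising limitation only (not a characterization of Park’s method), generating a second histogram by adding noise to a first histogram might look like the following sketch; the bin values and noise level are hypothetical.

    import numpy as np

    def noise_augment_histogram(hist, sigma=0.05, seed=0):
        # Add Gaussian noise scaled to the largest bin, then clip at zero so
        # the noisy bins remain valid (non-negative) histogram values.
        rng = np.random.default_rng(seed)
        noisy = hist + rng.normal(0.0, sigma * hist.max(), hist.shape)
        return np.clip(noisy, 0.0, None)

    first_histogram = np.array([0.0, 3.0, 7.0, 2.0, 0.0])  # hypothetical echo bins
    second_histogram = noise_augment_histogram(first_histogram)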
With regard to claims 15-16, Park teaches the system of claim 6, wherein the generation of the input data comprises: determining one or more first poses associated with the one or more ultrasonic sensors when obtaining the sensor data; determining, based at least on the one or more first poses, one or more second poses associated with the one or more ultrasonic sensors; and generating the input data based at least on the sensor data and the one or more second poses (a lane planner uses the lane graph; object poses within the lane graph and a target point are mapped to the best matching drivable point and direction in the lane graph, see [0061]+).
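Similarly, for context only, deriving a second sensor pose from a first pose and using it to transform sensor points could be sketched as follows; the SE(2) convention, mounting pose, and correction offset are illustrative assumptions, not drawn from Park or the claims.

    import numpy as np

    def se2(x, y, yaw):
        # Homogeneous 2D transform for a pose (x, y, yaw).
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0.0, 0.0, 1.0]])

    first_pose = se2(0.5, 0.0, np.deg2rad(10.0))  # hypothetical mounting pose
    correction = se2(0.0, 0.1, np.deg2rad(-2.0))  # hypothetical derived offset
    second_pose = first_pose @ correction         # second pose from the first

    point = np.array([2.0, 1.0, 1.0])             # homogeneous sensor point
    input_point = second_pose @ point             # point expressed via second pose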
With regard to claim 17, Park teaches that the system relates to machine-learning-based sensor fusion for autonomous machine applications, see [0025], which meets the scope of the claim.
With regard to claim 18, Park discloses a system comprising sensors of a vehicle and one or more processors to compute signals (2D or 3D) using machine learning models, see [0027]+; the one or more processors perform:
obtain sensor data obtained using one or more ultrasonic sensors of one or more machines (sensor data received using ultrasonic sensors, LiDAR, RADAR, see [0027]+);
generate input data based at least on augmenting at least a portion of the sensor data (the sensor data generated for training DNNs, see [0028]+ & [0034]+);
generate, based at least on one or more machine learning models processing the input data, output data representative of one or more locations associated with one or more objects or features (generated output signals to compute the feature outputs from one or more feature extractor layers of respective DNNs, see [0028]; a fused output 122 generated to indicate a shape, orientation, classification, and locations for objects or features in the environment, see [0034]-[0040]+); and
perform one or more operations based at least on the output data (the fused output 122 used by an autonomous driving software stack 124 to perform one or more operations by the vehicle, see [0055]+).
Although Park’s disclosure does not use the same word-for-word language, the Examiner interprets the vehicle’s system (as shown in Figs. 1 and 8), which uses a sensor system (including ultrasonic sensors) for obtaining sensor data, a fusion DNN 120, and a fused output 122 for computing output signals for controlling a vehicle, as equivalent to the scope of the claim. For this reason, Park at least suggests, if it does not anticipate, the claimed subject matter.
With regard to claim 19, Park teaches the one or more processors of claim 18, wherein the one or more operations comprise one or more of: causing a machine to navigate along a trajectory that is determined based at least on the output data; or updating one or more parameters associated with the one or more machine learning models based at least on the output data and ground truth data associated with the first sensor data (one or more of the layers, features, and the drive stack 124 uses the fused output 122 to generate outputs for control and actuation to aid the ego-machine in navigating the environment, see [0065]+).
With regard to claim 20, Park teaches that the system relates to machine-learning-based sensor fusion for autonomous machine applications, see [0025], which meets the scope of the claim.
With regard to claim 21, Park discloses a system comprising sensors of a vehicle and one or more processors to compute signals (2D or 3D) using machine learning models, see [0027]+; the one or more processors perform a method comprising:
obtaining sensor data obtained using one or more ultrasonic sensors of one or more machines (sensor data received using ultrasonic sensors, LiDAR, RADAR, see [0027]+);
generating input data based at least on augmenting at least a portion of the sensor data (the sensor data generated for training DNNs, see [0028]+ & [0034]+);
generating, based at least on one or more machine learning models processing the input data, output data representative of one or more locations associated with one or more objects or features (generated output signals to compute the feature outputs from one or more feature extractor layers of respective DNNs, see [0028]; a fused output 122 generated to indicate a shape, orientation, classification, and locations for objects or features in the environment, see [0034]-[0040]+); and
performing one or more operations based at least on the output data (the fused output 122 used by an autonomous driving software stack 124 to perform one or more operations by the vehicle, see [0055]+).
Although Park’s disclosure does not use the same word-for-word language, the Examiner interprets the vehicle’s system (as shown in Figs. 1 and 8), which uses a sensor system (including ultrasonic sensors) for obtaining sensor data, a fusion DNN 120, and a fused output 122 for computing output signals for controlling a vehicle, as equivalent to the scope of the claim. For this reason, Park at least suggests, if it does not anticipate, the claimed subject matter.
With regard to claim 22, Park teaches that the method of claim 21, wherein the generating the input data comprises:
causing the sensor data to be associated with augmentation data representative of information associated with at least one of the one or more ultrasonic sensors or one or more histograms represented by the sensor data (the sensor data may undergo pre-processing (e.g., noise balancing, augmentation, etc.), see [0030]-[0032]+);
generating, based at least on one or more neural networks processing the sensor data and the augmentation data, one or more outputs (generated output signals to compute the feature outputs from one or more feature extractor layers of respective DNNs, see [0028]; a fused output 122 generated to indicate a shape, orientation, classification, and locations for objects or features in the environment, see [0034]-[0040]+); and
generating the input data based at least on the one or more outputs (the output of the fusion DNN 120 generates the input data to the fused output 122, see [0039]+).
With regard to claim 23, Park teaches the method of claim 21, wherein: the sensor data represents one or more first histograms; and the generating the input data comprises: generating, based at least on adding noise to the one or more first histograms, second sensor data representative of one or more second histograms; and generating the input data based at least on the second sensor data (the sensor data represents sensory fields of sensors, e.g., a value graph for ultrasonic sensors, see [0030]+; a lane graph represents the path or paths available to the vehicle, wherein the graph is equivalent to the histograms, see [0057]+).
With regard to claims 24-25, Park teaches the method of claim 21, wherein the generating the input data comprises: determining one or more first poses associated with the one or more ultrasonic sensors when obtaining the sensor data; determining, based at least on the one or more first poses, one or more second poses associated with the one or more ultrasonic sensors; and generating the input data based at least on the sensor data and the one or more second poses (a lane planner uses the lane graph; object poses within the lane graph and a target point are mapped to the best matching drivable point and direction in the lane graph, see [0061]+).
Prior Art Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Philbin (20210101624) discloses a collision avoidance system that validates, rejects, or replaces a trajectory generated to control a vehicle. The system uses ultrasonic sensors to detect objects surrounding the vehicle and to generate sensor data that is analyzed to avoid collisions with those objects (see the Summary).
Costea (20230095410) discloses a system for detecting objects in an environment (see the abstract).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NGA X NGUYEN whose telephone number is (571)272-5217. The examiner can normally be reached M-F 5:30AM - 2:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JELANI SMITH can be reached at 571-270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NGA X. NGUYEN
Examiner
Art Unit 3662
/NGA X NGUYEN/Primary Examiner, Art Unit 3662