DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Summary
This communication is a First Office Action Non-Final Rejection on the merits.
Claims 1 – 20 are currently pending and considered below.
Information Disclosure Statement
The information disclosure statement filed on 22 May 2025 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered.
Applicant’s Information Disclosure Statements, filed from May 22, 2025 to September 3, 2025, have been received and entered into the record. However, it is impractical for the examiner to review the references thoroughly given the number of references cited in this case. By initialing each of the cited references on the accompanying 1449 forms, the examiner is merely acknowledging the submission of the cited references and indicating that only a cursory review has been made of them. MPEP § 2004.13 states: "It is desirable to avoid the submission of long lists of documents if it can be avoided. Eliminate clearly irrelevant and marginally pertinent cumulative information. If a long list is submitted, highlight those documents which have been specifically brought to applicant’s attention and/or are known to be of most significance." See Penn Yan Boats, Inc. v. Sea Lark Boats, Inc., 359 F. Supp. 948, 175 USPQ 260 (S.D. Fla. 1972), aff’d, 479 F.2d 1338, 178 USPQ 577 (5th Cir. 1973), cert. denied, 414 U.S. 874 (1974). But cf. Molins PLC v. Textron Inc., 48 F.3d 1172, 33 USPQ2d 1823 (Fed. Cir. 1995).
The references in the Information Disclosure Statements appear to be references from the parent application. However, the claimed invention is not the same as that of the parent application, and there is no way for the examiner to go through the extremely long lists of references to confirm the relevance of the references presented.
Further, it should be noted that an applicant’s duty of disclosure of material information is not satisfied by presenting a patent examiner with "a mountain of largely irrelevant material from which he is presumed to have been able, with his experience and with adequate time, to have found the critical [material]. It ignores the real world conditions under which examiners work." Rohm & Haas Co. v. Crystal Chemical Co., 722 F.2d 1556, 1573 [220 USPQ 289] (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). A patent applicant has a duty not just to disclose pertinent prior art references but to make the disclosure in such a way as not to "bury" it within other disclosures of less relevant prior art; see Golden Valley Microwave Foods Inc. v. Weaver Popcorn Co. Inc., 24 USPQ2d 1801 (N.D. Ind. 1992); Molins PLC v. Textron Inc., 26 USPQ2d 1889, at 1899 (D. Del. 1992); Penn Yan Boats, Inc. v. Sea Lark Boats, Inc. et al., 175 USPQ 260, at 272 (S.D. Fla. 1972).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 – 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shashua et al. (US 2017/0010618 A1) (hereinafter Shashua).
As per claim 1, Shashua teaches the limitations of:
an autonomous or semi-autonomous machine (See at least paragraph 6; systems and methods for autonomous vehicle navigation) comprising:
a plurality of sensors of two or more sensor modalities (See at least paragraph 511; navigation system 1700 may include one or more sensors, such as camera 122, GPS unit 1710, road profile sensor 1730, speed sensor 1720, and accelerometer 1725. Vehicle 1205 may include other sensors, such as radar sensors. The sensors included in vehicle 1205 may collect data related to road segment 1200 as vehicle 1205 travels along road segment 1200.);
one or more controllers (See at least paragraph 115; the control system may include at least one of a steering control, an acceleration control, and a braking control.);
one or more actuation components (See at least paragraph 138; FIG. 2F, vehicle 200 may include throttling system 220, braking system 230, and steering system 240.); and
one or more processors, the one or more processors comprising processing circuitry (See at least abstract and paragraph 266; navigation system for a vehicle may include at least one processor. The at least one processor may be programmed to determine a navigational maneuver for the vehicle based, at least in part, on a comparison of a motion of the vehicle with respect to a predetermined model representative of a road segment. The at least one processor may be further programmed to receive, from a camera, at least one image representative of an environment of the vehicle. The at least one processor may be further programmed to determine, based on analysis of the at least one image, an existence in the environment of the vehicle of an navigational adjustment condition, cause the vehicle to adjust the navigational maneuver based on the existence of the navigational adjustment condition, and store information relating to the navigational adjustment condition. … Both applications processor 180 and image processor 190 may include various types of processing devices. For example, either or both of applications processor 180 and image processor 190 may include a microprocessor, preprocessors (such as an image preprocessor), graphics processors, a central processing unit (CPU), support circuits, digital signal processors, integrated circuits, memory, or any other types of devices suitable for running applications and for image processing and analysis) to:
apply, to an end-to-end (E2E) neural network, sensor data obtained using the plurality of sensors (See at least paragraph 423 and 569; This module may look for edges in the image and assembles them together to form the lane marks. A second module may be used together with the bottom-up lane detection module. The second module is an end-to-end deep neural network, which may be trained to predict the correct short range path from an input image. … The system may take into account both the local shape of the lamppost and the arrangement of the lamppost in the scene: lampposts are typically at the side of the road (or on the divider), lampposts often appear more than once in a single image and at different sizes, Lampposts on highways may have fixed spacing based on country standards (e.g., around 25 m to 50 m spacing). The disclosed systems may use a convolutional neural network algorithm to classify a constant strip from the image (e.g., 136×72 pixels) that may be sufficient to catch almost all the street poles. The network may not contain any affine layers, and may only be composed of convolution layers, Max Pooling vertical layers and ReLu layers. The network's output dimension may be 3 times of the strip width, these three channels may have 3 degrees of freedom for each column in the strip. The first degree of freedom may indicate whether there is a street pole in this column, the second degree of freedom may indicate this pole's top, and the third degree of freedom may indicate its bottom. With the network's output results, the system may take all the local maximums that are above a threshold, and built rectangles bounding the poles.);
directly compute, based at least on the E2E neural network processing the sensor data, data representative of one or more trajectory points in three-dimensional (3D) world space (See at least paragraph 380 and 485; sparse map 800 may include representations of a plurality of target trajectories 810 for guiding autonomous driving or navigation along a road segment. Such target trajectories may be stored as three-dimensional splines. The target trajectories stored in sparse map 800 may be determined based on two or more reconstructed trajectories of prior traversals of vehicles along a particular road segment. A road segment may be associated with a single target trajectory or multiple target trajectories. For example, on a two lane road, a first target trajectory may be stored to represent an intended path of travel along the road in a first direction, and a second target trajectory may be stored to represent an intended path of travel along the road in another direction (e.g., opposite to the first direction). Additional target trajectories may be stored with respect to a particular road segment. For example, on a multi-lane road one or more target trajectories may be stored representing intended paths of travel for vehicles in one or more lanes associated with the multi-lane road. In some embodiments, each lane of a multi-lane road may be associated with its own target trajectory. In other embodiments, there may be fewer target trajectories stored than lanes present on a multi-lane road. 
In such cases, a vehicle navigating the multi-lane road may use any of the stored target trajectories to guide its navigation by taking into account an amount of lane offset from a lane for which a target trajectory is stored (e.g., if a vehicle is traveling in the left most lane of a three lane highway, and a target trajectory is stored only for the middle lane of the highway, the vehicle may navigate using the target trajectory of the middle lane by accounting for the amount of lane offset between the middle lane and the left-most lane when generating navigational instructions). … The principle underlying the maps generation is the integration of ego motion. The vehicles sense the motion of the camera in space (3D translation and 3D rotation). The vehicles or the server may reconstruct the trajectory of the vehicle by integration of ego motion over time, and this integrated path may be used as a model for the road geometry. This process may be combined with sensing of close range lane marks, and then the reconstructed route may reflect the path that a vehicle should follow, and not the particular path that it did follow. In other words, the reconstructed route or trajectory may be modified based on the sensed data relating to close range lane marks, and the modified reconstructed trajectory may be used as a recommended trajectory or target trajectory, which may be saved in the road model or sparse map for use by other vehicles navigating the same road segment.);
determine, using the one or more controllers, one or more controls to control the autonomous or semi-autonomous machine according to the one or more trajectory points (See at least paragraph 439 and 443; As vehicles 1205-1225 travel on road segment 1200, navigation information collected (e.g., detected, sensed, or measured) by vehicles 1205-1225 may be transmitted to server 1230. In some embodiments, the navigation information may be associated with the common road segment 1200. The navigation information may include a trajectory associated with each of the vehicles 1205-1225 as each vehicle travels over road segment 1200. In some embodiments, the trajectory may be reconstructed based on data sensed by various sensors and devices provided on vehicle 1205. For example, the trajectory may be reconstructed based on at least one of accelerometer data, speed data, landmarks data, road geometry or profile data, vehicle positioning data, and ego motion data. In some embodiments, the trajectory may be reconstructed based on data from inertial sensors, such as accelerometer, and the velocity of vehicle 1205 sensed by a speed sensor. In addition, in some embodiments, the trajectory may be determined (e.g., by a processor onboard each of vehicles 1205-1225) based on sensed ego motion of the camera, which may indicate three dimensional translation and/or three dimensional rotations (or rotational motions). The ego motion of the camera (and hence the vehicle body) may be determined from analysis of one or more images captured by the camera. …The autonomous vehicle road navigation model may use map data included in sparse map 800 for determining target trajectories along road segment 1200 for guiding autonomous navigation of autonomous vehicles 1205-1225 or other vehicles that later travel along road segment 1200. 
For example, when the autonomous vehicle road navigation model is executed by a processor included in a navigation system of vehicle 1205, the model may cause the processor to compare the trajectories determined based on the navigation information received from vehicle 1205 with predetermined trajectories included in sparse map 800 to validate and/or correct the current traveling course of vehicle 1205. ); and
send one or more control signals corresponding to the one or more controls to the one or more actuation components to cause the autonomous or semi-autonomous machine to navigate according to the one or more trajectory points (See at least paragraph 39 – 40; The recognized landmark may include at least one of a traffic sign, an arrow marking, a lane marking, a dashed lane marking, a traffic light, a stop line, a directional sign, a reflector, a landmark beacon, a lamppost, a change in spacing of lines on the road, or a sign for a business. The predetermined road model trajectory may include a three-dimensional polynomial representation of a target trajectory along the road segment. Navigation between recognized landmarks may include integration of vehicle velocity to determine a location of the vehicle along the predetermined road model trajectory. The processor may be further programmed to adjust the steering system of the vehicle based on the autonomous steering action to navigate the vehicle. The processor may be further programmed to: determine a distance of the vehicle from the at least one recognized landmark; and determine whether the vehicle is positioned on the predetermined road model trajectory associated with the road segment based on the distance. The processor may be further programmed to adjust the steering system of the vehicle to move the vehicle from a current position of the vehicle to a position on the predetermined road model trajectory when the vehicle is not positioned on the predetermined road model trajectory.
… A method of navigating a vehicle may include receiving, from an image capture device associated with the vehicle, at least one image representative of an environment of the vehicle; analyzing, using a processor associated with the vehicle, the at least one image to identify at least one recognized landmark; determining a current position of the vehicle relative to a predetermined road model trajectory associated with the road segment based, at least in part, on a predetermined location of the recognized landmark; determining an autonomous steering action for the vehicle based on a direction of the predetermined road model trajectory at the determined current location of the vehicle relative to the predetermined road model trajectory; and adjusting a steering system of the vehicle based on the autonomous steering action to navigate the vehicle.).
As per claim 2, Shashua teaches the limitation of:
wherein the one or more processors include at least one of: one or more graphics processing units (GPUs), one or more central processing units (CPUs), or one or more hardware accelerators (See at least paragraph 266).
As per claim 3, Shashua teaches the limitation of:
wherein the autonomous or semi-autonomous machine further comprises one or more systems-on-a-chip (SOCs), and the one or more processors are included in the one or more SOCs (See at least paragraph 266 - 267).
As per claim 4, Shashua teaches the limitation of:
wherein the one or more trajectory points correspond to a turn or a lane change (See at least paragraph 517).
As per claim 5, Shashua teaches the limitation of:
wherein map data is further applied to the E2E neural network, and the data representative of the one or more trajectory points in 3D world space are directly computed further based at least on the E2E neural network processing the map data (See at least paragraph 12 and 540).
As per claim 6, Shashua teaches the limitation of:
wherein vehicle state data is further applied to the E2E neural network, and the data representative of the one or more trajectory points in 3D world space are directly computed further based at least on the E2E neural network processing the vehicle state data (See at least paragraph 422 – 423).
As per claim 7, Shashua teaches the limitation of:
wherein the plurality of sensors include at least two of: a LiDAR sensor; an image sensor; a SONAR sensor; a depth sensor; a microphone sensor; a RADAR sensor; or an ultrasonic sensor (See at least abstract, paragraph 330 and 506).
As per claim 8, Shashua teaches the limitation of:
wherein the autonomous or semi-autonomous machine is a passenger vehicle, a truck, a bus, a robot, a warehouse vehicle, a flying vessel, or a boat (See at least paragraph 6 and 88).
As per claim 10, Shashua teaches the limitation of:
wherein the autonomous or semi-autonomous machine further comprises one or more internal sensors having fields of view or sensory fields internal to the autonomous or semi-autonomous machine, and wherein the computing system or another computing system of the autonomous or semi-autonomous machine perform in-cabin monitoring of one or more passengers using second sensor data obtained using the one or more internal sensors (See at least paragraph 6 and 418).
Regarding claims 9 and 11 – 20:
Claims 9 and 11 – 20 are rejected using the same rationale, mutatis mutandis, applied to claims 1 – 8 above, respectively.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Levinson et al. (US 2019/0278292 A1) discloses mesh decimation based on semantic information.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IG T AN whose telephone number is (571) 270-5110. The examiner can normally be reached M - F: 10:00 AM - 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad, can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IG T AN/Primary Examiner, Art Unit 3662
IG T AN
Primary Examiner
Art Unit 3662