DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of Claims
This Office Action is in response to the application filed on 9/6/2024. The application claims the domestic benefit of a provisional application filed on 9/13/2023; accordingly, the effective filing date is 9/13/2023. Claims 1-20 are presently pending and are presented for examination.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 3/14/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The disclosure is objected to because of the following informalities:
Paragraph [0222] of the instant specification includes a typographical error, “…SAFEY ISSUES…” which should be corrected to instead state “…SAFETY ISSUES…”.
Appropriate correction is required.
Claim Interpretation
Claim 20 is to be interpreted as "…at least one of 'turning', 'parking', 'changing lanes', or 'moving through a campsite'…" and not as "…at least one of 'turning, parking, changing lanes, or moving' through a campsite…" per [0200] of the instant specification.
Claim Objections
Claims 1, 6-8, 10, 12, and 17-19 are objected to because of the following informalities:
Claim 1 as currently presented states "...A computer-implemented method...the method..." which the Examiner recommends updating to instead state "...A computer-implemented method...the computer-implemented method..." so as to prevent potential misinterpretation.
Claim 7 and claim 8 as currently presented state "...the computer-implemented method...the method..." which the Examiner recommends updating to instead state "...the computer-implemented method...the computer-implemented method..." so as to prevent potential misinterpretation.
Claim 1 as currently presented states "...a generative artificial intelligence (AI) model trained...the trained generative AI model..."; claim 6 depends on claim 1 and states "...the generative AI model..." which the Examiner recommends updating to either clarify that the model of claim 6 has also been trained, or change other recitations of the model to be consistent throughout.
Similarly, claims 8, 17, and 18 state "...the generative AI model...the trained generative AI model…" which the Examiner recommends updating to be consistent.
Claim 1 and claim 12 as currently presented state "...the driving assistance, directions, instructions, and/or indicators...the driving assistance and/or driving instructions..." which the Examiner recommends updating to state "...the driving assistance, the directions, the instructions, and/or the indicators...the driving assistance and/or the driving instructions..." or the like.
Claim 6 as currently presented states "...the driver…that particular driver…” which the Examiner recommends updating to state "...the driver…the driver…”.
Claim 10 as currently presented states "...that particular vehicle…that particular trailer…” which the Examiner recommends updating to state either "...a particular vehicle…a particular trailer…” or similarly "...the vehicle…the trailer…”.
Claim 19 as currently presented states "...that particular driver…that particular driver…” which the Examiner recommends updating to state either "...a particular driver…the particular driver…” or similarly "...the driver…the driver…”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 5-6 and 12-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 5 and claim 16 as currently presented state "…and/or (vii) other sources…", which is open-ended and does not convey the extent of what is being claimed. The Examiner recommends updating the claims to definitively and succinctly state what is being claimed.
Claim 12 as currently presented states "…and/or other electronic or electrical components…", which is open-ended and does not convey the extent of what is being claimed. The Examiner recommends updating the claim to definitively and succinctly state what is being claimed.
Claims 6, 13-15, and 17-20 are also rejected by virtue of their dependency on a rejected base claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 5-12, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang et al. (US-2021/0291862; hereinafter Jiang) in view of Zhang et al. (US-2024/0185523; hereinafter Zhang).
Regarding claim 1, Jiang discloses a computer-implemented method for providing driving assistance and/or driving instructions (see Jiang at least Abs and [0079]), the method comprising:
inputting sensor data into a … model trained to provide driving assistance, directions, instructions, and/or indicators for a vehicle … based at least in part on the sensor data (see Jiang at least [0030]-[0031] "Server 103 may be a data analytics system to perform data analytics services for a variety of clients. In one embodiment, data analytics system 103 includes data collector 121 and machine learning engine 122. Data collector 121 collects driving statistics 123 from a variety of vehicles, either autonomous vehicles or regular vehicles driven by human drivers. Driving statistics 123 include information indicating the driving commands (e.g., throttle, brake, steering commands) issued and responses of the vehicles (e.g., speeds, accelerations, decelerations, directions) captured by sensors of the vehicles at different points in time. Driving statistics 123 may further include information describing the driving environments at different points in time, such as, for example, routes (including starting and destination locations), MPOIs, road conditions, weather conditions, etc. Based on driving statistics 123, machine learning engine 122 generates or trains a set of rules, algorithms, and/or predictive models 124 for a variety of purposes. In one embodiment, algorithms 124 may include algorithms used by a model predictive controller of the present disclosure. Algorithms 124 can then be uploaded on ADVs to be utilized during autonomous driving in real-time.");
generating, via the … model, the driving assistance, directions, instructions, and/or indicators for the vehicle … based at least in part on the sensor data (see Jiang at least [0031], quoted above); and
providing, via the … model and/or an associated user interface or user device, the driving assistance, directions, instructions, and/or indicators for the vehicle (see Jiang at least [0031]) … to at least one of:
(i) a driver of the vehicle that is driving the vehicle, or
(ii) the vehicle … to facilitate providing the driving assistance and/or driving instructions to drivers and/or vehicles (see Jiang at least [0072] "As discussed, learning based MPC module can adjust dynamically (e.g., while the ADV is driving) to account for real-time environmental conditions of the ADV. These environmental conditions can be gathered from servers 104 and 103, as well as localization module 301, map and route information 311, sensor system 115, and other modules. After adjusting to the environment, and accounting for physical attributes of the ADV, the MPC module can generate and optimized control command (e.g., throttle, steering, and/or brake) to be communicated to the control system 111.").
However, Jiang does not explicitly disclose the following:
…inputting sensor data into a generative artificial intelligence (AI) model … for a vehicle towing a trailer…
…generating, via the trained generative AI model … for the vehicle towing the trailer…
…the trained generative AI model … the vehicle towing the trailer…
Zhang, in the same field of endeavor, teaches the following:
…inputting sensor data into a generative artificial intelligence (AI) model … for a vehicle towing a trailer (see Zhang at least [0096] "The systems and methods described herein may be used by, without limitation, non-autonomous vehicles, semi-autonomous vehicles (e.g., in one or more adaptive driver assistance systems (ADAS)), piloted and un-piloted robots or robotic platforms, warehouse vehicles, off-road vehicles, vehicles coupled to one or more trailers, flying vessels, boats, shuttles, emergency response vehicles, motorcycles, electric or motorized bicycles, aircraft, construction vehicles, underwater craft, drones, and/or other vehicle types. Further, the systems and methods described herein may be used for a variety of purposes, by way of example and without limitation, for machine control, machine locomotion, machine driving, synthetic data generation, generative AI, model training, perception, augmented reality, virtual reality, mixed reality, robotics, security and surveillance, simulation and digital twinning, autonomous or semi-autonomous machine applications, deep learning, environment simulation, object or actor simulation and/or digital twinning, data center processing, conversational AI, light transport simulation (e.g., ray-tracing, path tracing, etc.), collaborative content creation for 3D assets, cloud computing and/or any other suitable applications.")…
…generating, via the trained generative AI model … for the vehicle towing the trailer (see Zhang at least [0096], quoted above)…
…the trained generative AI model … the vehicle towing the trailer (see Zhang at least [0096], quoted above)…
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle controls as disclosed by Jiang with a generative AI model such as taught by Zhang, with a reasonable expectation of success, so as to achieve refined vehicle controls (see Zhang at least [0002]-[0005]).
Regarding claim 3, Jiang in view of Zhang teach the computer-implemented method of claim 1, wherein the associated user interface or user device comprises a display screen (see Jiang at least [0026] "...User interface system 113 may be part of peripheral devices implemented within vehicle 101 including, for example, a keyboard, a touch screen display device, a microphone, and a speaker, etc."), and the driving assistance, directions, instructions, and/or indicators comprise visuals, graphics, holograms, text, and/or textual directions or instructions that are displayed upon the display screen or projected onto a surface or window (see Zhang at least [0104] "...For example, the HMI display 534 may display information about the presence of one or more objects (e.g., a street sign, caution sign, traffic light changing, etc.), and/or information about driving maneuvers the vehicle has made, is making, or will make (e.g., changing lanes now, taking exit 34B in two miles, etc.).").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle controls as disclosed by Jiang with a generative AI model such as taught by Zhang with a reasonable expectation of success for reasons similar to those provided above in claim 1.
Regarding claim 5, Jiang in view of Zhang teach the computer-implemented method of claim 1, wherein:
the sensor data comprises at least one of audio data, image data, GPS data, vehicle telematics data, or lane marker data (see Jiang at least [0023]-[0024] "Referring now to FIG. 2, in one embodiment, sensor system 115 includes, but it is not limited to, one or more cameras 211, global positioning system (GPS) unit 212, inertial measurement unit (IMU) 213, radar unit 214, and a light detection and range (LIDAR) unit 215... Sensor system 115 may further include other sensors, such as, a sonar sensor, an infrared sensor, a steering sensor, a throttle sensor, a braking sensor, and an audio sensor (e.g., microphone)..." and [0035] "Based on the sensor data provided by sensor system 115 and localization information obtained by localization module 301, a perception of the surrounding environment is determined by perception module 302. The perception information may represent what an ordinary driver would perceive surrounding a vehicle in which the driver is driving. The perception can include the lane configuration, traffic light signals, a relative position of another vehicle, a pedestrian, a building, crosswalk, or other traffic related signs (e.g., stop signs, yield signs), etc., for example, in a form of an object. The lane configuration includes information describing a lane or lanes, such as, for example, a shape of the lane (e.g., straight or curvature), a width of the lane, how many lanes in a road, one-way or two-way lane, merging or splitting lanes, exiting lane, etc."), and
the sensor data is generated and/or collected from one or more sources, including (i) the vehicle (see Jiang at least [0020]) towing the trailer; (ii) the trailer; (iii) other vehicles on a road via vehicle-to-vehicle (V2V) wireless communication (see Jiang at least [0019] and [0030]); (iv) smart infrastructure data; (v) aerial devices; (vi) mobile devices of the driver or passengers within the vehicle or other nearby vehicles; and/or (vii) other sources.
Regarding claim 6, Jiang in view of Zhang teach the computer-implemented method of claim 5, wherein the sensor data is collected while the driver is driving the vehicle and while the vehicle is towing the trailer to provide the generative AI model (see Zhang at least [0096], quoted above, and [0186] "AEB systems detect an impending forward collision with another vehicle or other object, and may automatically apply the brakes if the driver does not take corrective action within a specified time or distance parameter. AEB systems may use front-facing camera(s) and/or RADAR sensor(s) 560, coupled to a dedicated processor, DSP, FPGA, and/or ASIC. When the AEB system detects a hazard, it typically first alerts the driver to take corrective action to avoid the collision and, if the driver does not take corrective action, the AEB system may automatically apply the brakes in an effort to prevent, or at least mitigate, the impact of the predicted collision. AEB systems, may include techniques such as dynamic brake support and/or crash imminent braking.") with data that reflects how the vehicle handles while (i) towing the trailer (see Zhang at least [0096]), and (ii) being driven by that particular driver (see Jiang at least [0029] "...Based on the real-time traffic information, MPOI information, and location information, as well as real-time local environment data detected or sensed by sensor system 115 (e.g., obstacles, objects, nearby vehicles), perception and planning system 110 can plan an optimal route and drive vehicle 101, for example, via control system 111, according to the planned route to reach the specified destination safely and efficiently.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle controls as disclosed by Jiang with a generative AI model such as taught by Zhang with a reasonable expectation of success for reasons similar to those provided above in claim 1.
Regarding claim 7, Jiang in view of Zhang teach the computer-implemented method of claim 1, the method further comprising:
inputting GPS, road and/or route data into the generative AI model (see Zhang at least [0096]) trained to provide the driving assistance, directions, instructions, and/or indicators for the vehicle towing the trailer (see Zhang at least [0096]) based upon the GPS, road and/or route data in addition to the sensor data (see Jiang at least [0030]-[0031], quoted above); and
generating, via the trained generative AI model (see Zhang at least [0096]), the driving assistance, directions, instructions, and/or indicators for the vehicle towing the trailer (see Zhang at least [0096]) based upon the GPS, road and/or route data in addition to the sensor data (see Jiang at least [0030]-[0031], quoted above).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle controls as disclosed by Jiang with a generative AI model such as taught by Zhang with a reasonable expectation of success for reasons similar to those provided above in claim 1.
Regarding claim 8, Jiang in view of Zhang teach the computer-implemented method of claim 1, the method further comprising:
building or generating, via one or more processors, a virtual travel environment surrounding the vehicle based upon the sensor data received, collected, or generated (see Jiang at least [0055] "Further, although not required, simulation of the dynamic model can be performed in a virtual environment that can include objects and structures that are currently sensed around the ADV, to account for the ADV's current environment when generating control commands. The virtual environment can include a two-dimensional or three-dimensional representation of a current environment around the ADV. Although simplified, this environment can include geometry that defines boundaries of objects (e.g., pedestrians, vehicles, structures), as well as road boundaries. This virtual environment can be generated based on sensed data from sensor system 115, and/or information from map and route information 311, localization module 301, and other modules from perception and planning system 110.");
inputting the virtual travel environment into the generative AI model (see Zhang at least [0096]) trained to provide the driving assistance, directions, instructions, and/or indicators for the vehicle towing the trailer based upon the virtual travel environment in addition to the sensor data (see Jiang at least [0055], quoted above); and
generating and presenting, via the trained generative AI model, the driving assistance, directions, instructions, and/or indicators for the vehicle towing the trailer based upon the virtual travel environment in addition to the sensor data (see Zhang at least [0096]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle controls as disclosed by Jiang with a generative AI model such as taught by Zhang with a reasonable expectation of success for reasons similar to those provided above in claim 1.
Regarding claim 9, Jiang in view of Zhang teach the computer-implemented method of claim 8, wherein the virtual travel environment includes direction, heading, GPS location, lane of travel, and speed of travel of other vehicles in a vicinity of and/or surrounding the vehicle (see Jiang at least [0055], quoted above) towing the trailer (see Zhang at least [0096]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle controls as disclosed by Jiang with a generative AI model such as taught by Zhang with a reasonable expectation of success for reasons similar to those provided above in claim 1.
Regarding claim 10, Jiang in view of Zhang teach the computer-implemented method of claim 1, wherein the generative AI model (see Zhang at least [0096] and [0186]) is trained to know how that particular vehicle travels or handles with that particular trailer while traveling on a road (see Jiang at least [0029], quoted above).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the vehicle controls as disclosed by Jiang with a generative AI model such as taught by Zhang with a reasonable expectation of success for reasons similar to those provided above in claim 1.
Regarding claim 11, Jiang in view of Zhang teach the computer-implemented method of claim 1, wherein the driving assistance, directions, instructions, and/or indicators provide steering directions of when to turn and how much to turn a steering wheel of the vehicle to facilitate keeping the vehicle in a correct lane while turning (see Jiang at least [0042] "In one embodiment, the planning phase is performed in a number of planning cycles, also referred to as driving cycles, such as, for example, in every time interval of 100 milliseconds (ms). For each of the planning cycles or driving cycles, one or more control commands will be issued based on the planning and control data. That is, for every 100 ms, planning module 305 plans a next route segment or path segment, for example, including a target position and the time required for the ADV to reach the target position. Alternatively, planning module 305 may further specify the specific speed, direction, and/or steering angle, etc. In one embodiment, planning module 305 plans a route segment or path segment for the next predetermined period of time such as 5 seconds. For each planning cycle, planning module 305 plans a target position for the current cycle (e.g., next 5 seconds) based on a target position planned in a previous cycle. Control module 306 then generates one or more control commands (e.g., throttle, brake, steering control commands) based on the planning and control data of the current cycle." and [0064] "The MPC module 602 can include a vehicle model 670 and a cost function 674. The cost function can include cost terms 678, and associated weights 676. The MPC module can generate a sequence of future commands 672 (e.g., throttle, brake, and steering) that will predictively effect movement of the vehicle model such that the vehicle model tracks the reference, while minimizing the cost function.").
Regarding claim 12, Jiang in view of Zhang teach material analogous to that of claim 1 as recited in the instant claim, and the claim is rejected for similar reasons. Additionally, Jiang discloses the following:
…a computer system configured to provide driving assistance and/or driving instructions (see Jiang at least Abs and [0016]), the computer system comprising…
…one or more local or remote processors (see Jiang at least [0027]), servers (see Jiang at least [0026]), transceivers, sensors, cameras, memory units (see Jiang at least [0027]), mobile devices, wearables, smart watches, smart contact lenses, smart glasses, augmented reality (AR) glasses, virtual reality (VR) headsets, mixed reality (MR) or extended reality glasses or headsets, voice bots or chatbots, ChatGPT or ChatGPT-based bots, and/or other electronic or electrical components, which may be in wired or wireless communication with one another, wherein…
…one or more processors and/or associated transceivers are configured to…
…(i) receive, collect, or generate vehicle sensor data from sensors mounted on a vehicle and/or a trailer being towed by the vehicle (see Jiang at least [0024] "Sensor system 115 may further include other sensors, such as, a sonar sensor, an infrared sensor, a steering sensor, a throttle sensor, a braking sensor, and an audio sensor (e.g., microphone). An audio sensor may be configured to capture sound from the environment surrounding the autonomous vehicle. A steering sensor may be configured to sense the steering angle of a steering wheel, wheels of the vehicle, or a combination thereof. A throttle sensor and a braking sensor sense the throttle position and braking position of the vehicle, respectively. In some situations, a throttle sensor and a braking sensor may be integrated as an integrated throttle/braking sensor.")…
…(ii) receive external sensor data from one or more external sources via wireless communication over one or more radio frequency links (see Jiang at least [0019] "FIG. 1 is a block diagram illustrating an autonomous vehicle network configuration according to one embodiment of the disclosure. Referring to FIG. 1, network configuration 100 includes autonomous vehicle 101 that may be communicatively coupled to one or more servers 103-104 over a network 102. Although there is one autonomous vehicle shown, multiple autonomous vehicles can be coupled to each other and/or coupled to servers 103-104 over network 102. Network 102 may be any type of networks such as a local area network (LAN), a wide area network (WAN) such as the Internet, a cellular network, a satellite network, or a combination thereof, wired or wireless. Server(s) 103-104 may be any kind of servers or a cluster of servers, such as Web or cloud servers, application servers, backend servers, or a combination thereof. Servers 103-104 may be data analytics servers, content servers, traffic information servers, map and point of interest (MPOI) servers, or location servers, etc." and [0072], quoted above)…
Regarding claim 14, Jiang in view of Zhang teach material analogous to that of claim 3 as recited in the instant claim, and the claim is rejected for similar reasons.
Regarding claim 16, Jiang in view of Zhang teach material analogous to that of claim 5 as recited in the instant claim, and the claim is rejected for similar reasons.
Regarding claim 17, Jiang in view of Zhang teach material analogous to that of claim 7 as recited in the instant claim, and the claim is rejected for similar reasons.
Regarding claim 18, Jiang in view of Zhang teach material analogous to that of claim 8 as recited in the instant claim, and the claim is rejected for similar reasons.
Regarding claim 19, Jiang in view of Zhang teach material analogous to that of claim 10 as recited in the instant claim, and the claim is rejected for similar reasons.
Regarding claim 20, Jiang in view of Zhang teach material analogous to that of claim 11 as recited in the instant claim, and the claim is rejected for similar reasons.
Claims 2, 4, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang in view of Zhang, and further in view of Carbune et al. (US-2023/0143177; hereinafter Carbune).
Regarding claim 2, Jiang in view of Zhang teach the computer-implemented method of claim 1. While Jiang discloses a user interface with a microphone and speaker, neither Jiang nor Zhang explicitly discloses or teaches the following:
…the associated user interface or user device comprises a chatbot or voice bot, and the driving assistance, directions, instructions, and/or indicators comprise audible or verbal directions or instructions.
Carbune, in the same field of endeavor, teaches the following:
…the associated user interface or user device comprises a chatbot or voice bot, and the driving assistance, directions, instructions, and/or indicators comprise audible or verbal directions or instructions (see Carbune at least [0001] "Humans may engage in human-to-computer dialogs with interactive software applications referred to herein as “automated assistants” (also referred to as “chatbots,” “interactive personal assistants,” “intelligent personal assistants,” “personal voice assistants,” “conversational agents,” etc.). For example, humans (which when they interact with automated assistants may be referred to as “users”) may provide spoken natural language input (i.e., spoken utterances) to an automated assistant, which may in some cases be converted into text and then processed, and/or by providing textual (e.g., typed) natural language input. An automated assistant generally responds to the spoken utterances by providing responsive user interface output (e.g., audible and/or visual user interface output), controlling smart device(s), and/or performing other action(s)." and [0026]-[0027] "The client device 110 may be, for example, one or more of: a desktop computer, a laptop computer, a tablet, a mobile phone, a computing device of a vehicle (e.g., an in-vehicle communications system, an in-vehicle entertainment system, an in-vehicle navigation system), a standalone interactive speaker (optionally having a display), a smart appliance such as a smart television, and/or a wearable apparatus of the user that includes a computing device (e.g., a watch of the user having a computing device, glasses of the user having a computing device, a virtual or augmented reality computing device). Additional and/or alternative client devices may be provided. The client device 110 can execute an automated assistant client 114. An instance of the automated assistant client 114 can be an application that is separate from an operating system of the client device 110 (e.g., installed “on top” of the operating system) - or can alternatively be implemented directly by the operating system of the client device 110. The automated assistant client 114 can interact with the warm word system 180 implemented locally at the client device 110 or via one or more of the networks 199 as depicted in FIG. 1. The automated assistant client 114 (and optionally by way of its interactions with other remote system (e.g., server(s))) may form what appears to be, from a user’s perspective, a logical instance of an automated assistant 115 with which the user may engage in a human-to-computer dialog. An instance of the automated assistant 115 is depicted in FIG. 1, and is encompassed by a dashed line that includes the automated assistant client 114 of the client device 110 and the warm word system 180. It thus should be understood that a user that engages with the automated assistant client 114 executing on the client device 110 may, in effect, engage with his or her own logical instance of the automated assistant 115 (or a logical instance of the automated assistant 115 that is shared amongst a household or other group of users). For the sake of brevity and simplicity, the automated assistant 115 as used herein will refer to the automated assistant client 114 executing on the client device 110 and/or one or more servers that may implement the warm word system 180.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the user interface as disclosed by Jiang with a chatbot such as taught by Carbune, with a reasonable expectation of success, so as to provide audible assistance responsive to a user's input (see Carbune at least [0001]).
Regarding claim 4, Jiang in view of Zhang teach the computer-implemented method of claim 1. While Zhang teaches the use of augmented reality, neither Jiang nor Zhang explicitly discloses or teaches the following:
…the associated user interface or user device comprises Augmented Reality (AR) glasses, and the driving assistance, directions, instructions, and/or indicators comprise visuals, graphics, icons, text, and/or textual directions, instructions, and/or indicators that are displayed via the AR glasses as an overlay over actual images of an environment.
Carbune, in the same field of endeavor, teaches the following:
…the associated user interface or user device comprises Augmented Reality (AR) glasses, and the driving assistance, directions, instructions, and/or indicators comprise visuals, graphics, icons, text, and/or textual directions, instructions, and/or indicators that are displayed via the AR glasses as an overlay over actual images of an environment (see Carbune at least [0026]-[0027], quoted above).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the user interface as disclosed by Jiang with AR glasses such as taught by Carbune, with a reasonable expectation of success, so as to provide interactive assistance responsive to a user's input (see Carbune at least [0001]).
Regarding claim 13, Jiang in view of Zhang and Carbune teach material analogous to that of claim 2 as recited in the instant claim, and the claim is rejected for similar reasons.
Regarding claim 15, Jiang in view of Zhang and Carbune teach material analogous to that of claim 4 as recited in the instant claim, and the claim is rejected for similar reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yagyu et al. (US-2022/0107201) teaches a vehicle with a control device capable of providing information about a steering wheel angle.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN REIDY whose telephone number is (571) 272-7660. The examiner can normally be reached Monday-Friday, 7:00 AM to 3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Flynn, can be reached on (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.P.R./Examiner, Art Unit 3663
/ABBY J FLYNN/Supervisory Patent Examiner, Art Unit 3663