Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to communications filed on 9/18/2024. Accordingly, claims 1-20 are pending.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20, and in particular independent claims 1 and 16, are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The following language in the claims below is not clearly understood:
As per claims 1 and 16: both claims recite the limitation "at least partially obstructed." It is unclear what distinguishes a partially obstructed field of view from an obstructed field of view (i.e., at what degree of obstruction a partially obstructed field of view becomes an obstructed one). For purposes of examination, the examiner will interpret this limitation to encompass any level of obstruction (i.e., 0-100% obstruction).
As per claims 2-15 and 17-20: these claims depend from independent claims 1 and 16, respectively, and are therefore rejected for inheriting the same deficiencies identified above with respect to claims 1 and 16.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fischer et al. (US 2009/0079839 A1; hereinafter "Fischer").
Fischer discloses:
1: A method comprising:
receiving, at a computing system and from a vehicle sensor coupled to a vehicle, sensor data representing an environment of the vehicle (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; convoy transmitting signals from vehicle sensors indicating status of environment/terrain);
determining, based on the sensor data, a field of view of the environment is at least partially obstructed (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; obstacles detected by vehicle sensors);
determining, by the computing system, a first navigation option and a second navigation option for the vehicle to perform based on determining the field of view of the environment is at least partially obstructed (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions);
sending, by the computing system, a request for assistance to a remote computing device, wherein the request includes sensor data showing that the field of view is at least partially obstructed and data representing the first navigation option and the second navigation option (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions);
receiving, at the computing system and from the remote computing device, a response specifying an instruction for the vehicle to perform (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions); and
controlling the vehicle based on the instruction (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions).
2: wherein receiving sensor data representing the environment of the vehicle comprises: receiving video data from a camera coupled to the vehicle; and wherein sending the request for assistance to the remote computing device comprises: transmitting the video data to the remote computing device, wherein the remote computing device is configured to display the video data (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 57-63, 80-81, and 100; receiving video data through video cameras, LIDAR and/or radar systems).
3: wherein receiving sensor data representing the environment of the vehicle comprises: receiving radar data from a radar coupled to the vehicle and LIDAR data coupled to the vehicle (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 57-63, 80-81, and 100; receiving video data through video cameras, LIDAR and/or radar systems).
4: wherein determining the field of view of the environment is at least partially obstructed comprises: receiving a warning signal from a sensor system of the vehicle; and determining the field of view of the environment is at least partially obstructed responsive to receiving the warning signal from the sensor system (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, and 100; detecting various obstacles along route/path).
5: wherein receiving the response specifying the instruction for the vehicle to perform comprises: receiving instructions to perform a navigation option with a higher level of caution; and wherein controlling the vehicle based on the instruction comprises: controlling the vehicle according to the higher level of caution (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, and 100; approaching detected obstacles cautiously (caution screen)).
6: wherein sending the request comprises: sending one or more timestamps indicative of when the sensor data was collected (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; real-time path planning).
7: further comprising: generating one or more questions based on determining the field of view of the environment is at least partially obstructed; and sending the one or more questions with the request (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions).
8: wherein receiving the response specifying the instruction for the vehicle to perform comprises: receiving, from the remote computing device, a given response that addresses the one or more questions (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions).
9: wherein the remote computing device is configured to generate the given response based on input from a human operator (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions).
10: further comprising: determining that a confidence metric associated with the vehicle performing the first navigation option and the second navigation option is below a predetermined threshold; and based on determining that the confidence metric is below the predetermined threshold, sending the request for assistance to the remote computing device (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions (caution screen)).
11: wherein determining the first navigation option and the second navigation option for the vehicle to perform comprises: determining the first navigation option to involve remaining at a current position of the vehicle and the second navigation option to involve passing an obstacle obstructing the field of view of the environment (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions).
12: wherein receiving the response specifying the instruction for the vehicle to perform comprises: receiving the response further specifying a particular range of speed for the vehicle to perform a given navigation option (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions (real-time route planning)).
13: wherein controlling the vehicle based on the instruction comprises: controlling the vehicle according to the particular range of speed (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; predetermined performance values, relative speed of vehicles).
14: wherein the instruction indicates for the vehicle to perform the first navigation option (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions (real-time route planning)).
15: wherein the instruction indicates for the vehicle to perform a given navigation option that differs from the first navigation option and the second navigation option, and wherein the given navigation option is determined by a human operator (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions (real-time route planning)).
16: A system comprising:
a vehicle sensor coupled to a vehicle (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; convoy transmitting signals from vehicle sensors indicating status of environment/terrain);
and a computing system coupled to the vehicle, wherein the computing system is configured to: receive, from the vehicle sensor, sensor data representing an environment of the vehicle (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; obstacles detected by vehicle sensors);
determine, based on the sensor data, a field of view of the environment is at least partially obstructed (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions);
determine a first navigation option and a second navigation option for the vehicle to perform based on determining the field of view of the environment is at least partially obstructed (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions);
send a request for assistance to a remote computing device, wherein the request includes sensor data showing that the field of view is at least partially obstructed and data representing the first navigation option and the second navigation option (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions);
receive, from the remote computing device, a response specifying an instruction for the vehicle to perform (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions);
and control the vehicle based on the instruction (see Fischer, at least Figs. 1-26, in particular Figs. 1 and 6D-8D, the Abstract, the Summary, and ¶¶ 57-63 and 80-81; vehicle control center, operator controlling vehicles in manual and/or autonomous mode, maneuvering vehicles in multiple directions).
17: wherein the vehicle sensor comprises at least one of a radar, a LIDAR, and a camera (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 57-63, 80-81, and 100; receiving video data through video cameras, LIDAR and/or radar systems).
18: wherein the computing system is further configured to: receive a warning signal from one or more sensor systems of the vehicle; and responsive to receiving the warning signal, determine the field of view of the environment is at least partially obstructed (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, and 100; detecting various obstacles along route/path).
19: wherein the instruction received from the remote computing device specifies a level of caution, and wherein the computing system is further configured to control the vehicle according to the level of caution specified in the instruction (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, and 100; approaching detected obstacles cautiously (caution screen)).
20: wherein the computing system is configured to provide a live video stream to the remote computing device, and wherein the remote computing device is configured to display the live video stream (see Fischer, at least Figs. 1-26, in particular Figs. 1, 6D-8D, and 13, the Abstract, the Summary, and ¶¶ 55-63, 80-81, 100, and 146-148; detecting various obstacles along route/path, multiple operators coordinating complex tactical actions (real-time route planning)).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MACEEH ANWARI whose telephone number is 571-272-7591. The examiner can normally be reached on 9-9:30.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Angela Ortiz, can be reached at 571-272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MACEEH ANWARI/ Primary Examiner, Art Unit 3663