DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see Remarks, filed 12/15/2025, with respect to claims 14-25 have been fully considered and are persuasive. The rejection of claims 14-25 has been withdrawn.
Although claim 26 is also an independent claim, it was neither similarly amended nor addressed. (Several attempts were made to contact the applicant but were unsuccessful; the applicant’s return calls may have been mislabeled as spam.) Hence, the rejection of claim 26 under 35 U.S.C. 103 is maintained, as set forth below.
Claim Objections
Claim 16 is objected to because of the following informalities: claim 16 begins with “A The method for outputting…”, which should be replaced by “A method for outputting…”.
Appropriate correction is required.
Author on Form PTO-892
It is noted that the previous argument for claim 26 referred to Gammer as the first author of DE 102018212056. This was incorrect; the argument should have referred to Dollinger, under whose name the reference was listed on the Form PTO-892 mailed 5/9/2025. (Gammer and Dollinger are both listed as authors of DE 102018212056, as can be seen on the copy of the reference mailed 5/9/2025.)
The argument for claim 26 is provided again below, rewritten with Dollinger identified as the first author.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over DE 102016206694 (Lachmund) in view of DE 102018212056 (Dollinger et al., hereinafter Dollinger).
Regarding claim 26, Lachmund teaches a vehicle operating fully autonomously (Fig. 1, automobile 1; note that nothing in the description requires an individual in the vehicle to act as a driver rather than as a passenger making gestures, so although the reference is described as being for “a highly automated vehicle,” the invention would work the same way for a vehicle operating fully autonomously), comprising:
a computing unit configured to output to a road user [warning signal] from a vehicle operating fully autonomously ("Evaluation device (20)" and "transmission device (30)" shown in Fig. 1), the computing unit configured to:
capture a gesture and/or an acoustic message from at least one vehicle occupant of the vehicle operating fully autonomously ("Acquiring information about a driver of the vehicle in the interior of the vehicle as driver information; Evaluating and classifying the driver information…" paragraph 2; "The evaluation device is preferably designed to determine when classifying the driver information as to whether and with which reliability one or more driver actions or driver reactions of a given driver action or reaction catalog are present. Based on a driver action and / or driver reaction catalog, in which the individual driver actions or reactions are given, on the presence of a behavior of the driver can be checked, it is possible to output clear information to other road users." (pg. 7));
detect a road user in surroundings of the vehicle ("The detected driver information of the other vehicle 101 are here as road user driver information from the perspective of the driver 2 of the vehicle 1 designated." (page 2); Fig. 1 shows the other road user (“driver” 102 in vehicle 101); “road user” being a cyclist/motorcyclist/pedestrian, page 2);
detect a viewing direction of the road user at a time of gesture capture and/or capture of the acoustic message ("further detection device 15 of the vehicle 1 also includes an outdoor camera for this purpose 16 , with the information about the road user 102 be recorded. From the evaluation device 20 the thus acquired road user information is classified against a road user action and response catalog." (pg. 2); "Evaluation device is coupled to classify the road user information by means of the evaluation device or the further evaluation device, wherein it is determined whether and with which reliability one or more road user actions and / or road user reactions of a given road user action and reaction catalog is present and classified as present road user actions and / or provide road user responses to an automated driving vehicle system as an input." (page 2); "Therefore, in such a situation, in addition to the head movement for a classification of the head shaking motion as a rejection for a suggestion gesture of another road user, for example, to drive the call first, also requires a view direction evaluation that checks whether the driver is looking at the other road user." (pg. 9). Since the system must both capture and classify a gesture of the autonomous vehicle’s “driver” and capture and interpret whether another vehicle’s “driver” is responding to that initial gesture (and, if so, what the response is), the detection and interpretation must be carried out at essentially the same time the gestures are made; in practice the two occur nearly simultaneously, or at least closely enough that the broadest reasonable interpretation of “at a time of gesture capture” is met);
output to the road user the warning signal from the vehicle operating fully autonomously depending on the captured gesture of the vehicle occupant and/or the acoustic message and the viewing direction of the road user ("This classified driver information or a code that uniquely identifies the corresponding driver action is transmitted via a transmission device 30 which a transmitting device 32 for vehicle-to-vehicle communication to the other vehicle 101 of the road user 102 transfer." (pg. 9));
acquire first sensor data relating to the gesture and/or an acoustic message of the at least one vehicle occupant of the vehicle operating fully autonomously ("Particularly advantageously, the at least one detection device comprises a camera for detecting the driver information. This can monitor the interior of the driver's seat. With a camera, most of the gestures that occur when communicating with different road users can be reliably detected. Thus, movements of the body extremities, the head, but also an attitude of the upper body can be detected. In addition, it is possible to detect the viewing direction or even a facial expression."(pg.7));
acquire second sensor data relating to the road user detected in the surroundings of the vehicle ("the vehicle comprises a further detection device for acquiring information about the at least one other road user as road user information…"(page 2));
acquire third sensor data relating to the detected viewing direction of the road user at the time of gesture capture and/or capture of the acoustic message ("Therefore, in such a situation, in addition to the head movement for a classification of the head shaking motion as a rejection for a suggestion gesture of another road user, for example, to drive the call first, also requires a view direction evaluation that checks whether the driver is looking at the other road user." (pg. 9), which implies the existence of a sensor capturing such data. As explained above with respect to the viewing-direction limitation, the detection must be carried out at essentially the same time the gesture is made, which satisfies the broadest reasonable interpretation of “at the time of gesture capture”);
and generate a warning signal directed at the road user from the vehicle operating fully autonomously depending on the acquired first, second and third sensor data (since the information sent to the other vehicle (Fig. 2) requires interpretation of the gesture and a determination of whether the detected road user can see it (the viewing-direction evaluation mentioned above), all three sets of data are necessary, and the output therefore “depends” on them);
at least one first environment capture unit configured to capture the gesture and/or acoustic message from the at least one vehicle occupant of the vehicle operating fully autonomously (internal camera: "The vehicle 1 includes an assistance system 10 which is a detection device 11 for detecting the interior of the vehicle 1 , especially the driver 2 , includes. The detection device 11 preferably has a camera for this purpose 12 ," (pg. 9));
at least one second environment capture unit configured to detect the road user in the surroundings of the vehicle (exterior camera 16 (See Fig. 1));
and a signal transmitter configured to output to the road user the warning signal from the vehicle operating fully [autonomously] ("transmission equipment" 30 (see Fig. 1)).
Lachmund does not specifically teach outputting a visual or acoustic warning signal to a road user. However, this is known in the art, as taught by Dollinger ("It is known that a vehicle projects information (e.g. symbols, warning signs, etc.) into its surroundings in order to make it available to other road users. The information is projected in front of the vehicle in particular in the direction of travel, e.g. to warn other road users of the approaching vehicle." (pg. 2); Fig. 1 shows an autonomous vehicle sending an optical message to a road user, in this case a pedestrian).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use an optical warning signal, as shown by Dollinger, in the system of Lachmund, with a reasonable expectation of success. The motivation would have been to enable communication with as many types of road users as possible.
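For illustration only, the combined teaching as applied to claim 26 can be summarized by the following sketch: the warning output depends on (1) the captured occupant gesture/acoustic message, (2) the detected road user, and (3) that road user's viewing direction, with an optical signal used when the road user is looking toward the vehicle (cf. Dollinger's projection of symbols). All names, types, and the optical/acoustic selection rule below are illustrative assumptions and are not taken from Lachmund or Dollinger.

```python
# Illustrative sketch only (hypothetical names); not drawn from either cited reference.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OccupantInput:          # first sensor data (interior camera / microphone)
    gesture: Optional[str]    # e.g. "wave_through"
    acoustic: Optional[str]   # e.g. spoken "go ahead"


@dataclass
class RoadUser:               # second sensor data (exterior camera)
    kind: str                 # "pedestrian", "cyclist", "vehicle"
    looking_at_vehicle: bool  # third sensor data: viewing-direction evaluation


def generate_warning(occupant: OccupantInput, road_user: Optional[RoadUser]) -> Optional[str]:
    """Return an optical or acoustic warning signal, or None if no output is warranted."""
    if road_user is None:
        return None  # no road user detected in the surroundings
    if occupant.gesture is None and occupant.acoustic is None:
        return None  # nothing captured from the vehicle occupant
    message = occupant.gesture or occupant.acoustic
    # If the road user is looking toward the vehicle, project an optical symbol;
    # otherwise fall back to an acoustic announcement (assumed fallback).
    if road_user.looking_at_vehicle:
        return f"optical: project symbol for '{message}'"
    return f"acoustic: announce '{message}'"


# Example: a pedestrian looking at the vehicle while the occupant waves them through
print(generate_warning(OccupantInput("wave_through", None), RoadUser("pedestrian", True)))
```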
Allowable Subject Matter
Claims 14-25 are allowed.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TANYA CHRISTINE SIENKO whose telephone number is (571)272-5816. The examiner can normally be reached Mon - Fri 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito Robinson, can be reached at 571-270-3912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TANYA C SIENKO/ Examiner, Art Unit 3664
/KITO R ROBINSON/ Supervisory Patent Examiner, Art Unit 3664