DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This correspondence is in response to the amendments filed on November 24, 2025. Claims 1-4, 7-15, and 18-20 are amended. Claims 5-6 and 16-17 remain as originally presented. The amendments to claims 8 and 19 obviate the previous claim objection; accordingly, that objection is withdrawn. Applicant’s arguments are addressed below.
Response to Arguments
Applicant argues that Kim in view of Hara does not teach the amended limitations of the claims (see Remarks, pages 9-10). Applicant’s arguments with respect to the claims rejected as unpatentable over Kim in view of Hara have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Claim 10 is objected to because of the following informalities:
Claim 10 recites “…upon a corrective action that is an automated vehicle command … satisfies a parameter…” in lines 9-10. The verb form “satisfies” is grammatically improper in this limitation. Examiner recommends correcting the limitation to recite “…upon a corrective action that is an automated vehicle command … satisfying a parameter…” as was previously recited in the limitation before amendment.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6, 8, 10-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kiriya (US 2015/0375741 A1) in view of Donald et al. (US 2018/0056989 A1; hereinafter “Donald”) and further in view of Iida et al. (US 2024/0317313 A1; hereinafter “Iida”).
Regarding claim 1, Kiriya teaches a detection system comprising:
a memory storing instructions that, when executed by a processor, (“The storage unit 43 stores a manual parking threshold value 43a, an automatic parking threshold value 43b, a parking adjustment threshold value 43c, and a program 43d” [0037]. “The program 43d is firmware that is read and executed to control the vehicle apparatus 4 by the control unit 41” [0042]. Thus, there is a storage unit that stores a program, i.e., instructions, that are executed by the control unit which is described in [0035] as a microcomputer, i.e., processor.) cause the processor to:
detect a gesture command for an automated action from a user outside of a vehicle using sensor data (Fig. 15 shows a process in which acceleration instructions and motion instructions are recognized and verified to determine a vehicle instruction. Fig. 13 displays a table including examples of these movement instructions, i.e., automated action, which correspond to gesture commands with specific acceleration and image motions initiated by the user. Fig. 9 shows the user providing movement instructions for the automated action outside the vehicle using acceleration data and image data from the acceleration sensor and camera, respectively (see also [0135-0136] regarding gesture command detection).) …;
predict an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data …, wherein the obstacle causes a halt of the automated action according to a … positioning information about the vehicle (In Fig. 16, S2110 predicts an obstacle in the vicinity of the vehicle which causes the vehicle to stop regardless of the movement command which was received and corresponding movement performed, i.e., vehicle state, when an obstacle is present. This stopping of the vehicle halts the automated action, thus rendering the task incomplete. According to [0219], the prediction is based on detection signals according to clearance sonar, i.e., positioning information about the vehicle.)…
However, Kiriya is silent to teachings explicitly directed to the limitations …wherein the detection system authenticates the user prior to accepting the gesture command…
predict an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notify a user, wherein the obstacle causes a halt of the automated action according to a global positioning information about the vehicle; and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position.
Donald, pertinent to the problem at hand, teaches … wherein the detection system authenticates the user prior to accepting the gesture command (“Assuming the driver attention and biometric authentication steps 108, 110 have been determined positively, the method 120 proceeds to step 110. At 110, a driver gesture is determined from the video data 33 by the gesture determination module 34” [0065]. Thus, there is a step of authenticating the driver before determining the gesture command.)…
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the gesture recognition of Kiriya to include the authentication step of Donald with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification such that the risk of bystanders taking control of the vehicle is significantly reduced (Donald, [0062]).
However, Kiriya as modified by Donald still does not teach …predict an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notify a user, wherein the obstacle causes a halt of the automated action according to a global positioning information about the vehicle; and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position.
Iida, pertinent to the problem at hand, teaches …predict an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notify a user (In Fig. 10, a user is provided with a message regarding an obstacle for the incomplete task of the automated parking maneuver according to the vehicle’s movement and corresponding movement path, i.e., vehicle state. Obstacles are detected via sensor data as described in [0157].), … according to a global positioning information about the vehicle (The vehicle action is tracked based on GPS position information of the vehicle which is acquired as the absolute position of the vehicle as it is parking (see [0157] and S101-S103 of Fig. 9).); and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position (Fig. 9 provides S108-S110, which generate a corrected path different than the initial parking path instruction, i.e., a corrective response that is an automated vehicle command different than the initial command. Figs. 6-8 show the corrected path, which is determined to satisfy a distance threshold, i.e., an obstacle avoidance parameter, automatically. In S111 of Fig. 9, the task which was interrupted by the obstacle, i.e., incomplete, is performed by automatic travel, which is implemented via the travel control module (explained in [0073]) as part of the parking assistance device (see [0060]), i.e., ADS.).
Although Iida does not explicitly describe gesture commands or moving the vehicle from a stop position, when combined with the teachings of Kiriya, one of ordinary skill in the art would be able to combine elements such as the gesture commands and halting/stop position of Kiriya with the corrective response elements of Iida. That is, in summary, Kiriya in view of Iida would satisfy the requirements that the vehicle stops when an obstacle is detected based on sensor data (Kiriya) and on a tracked global position of the vehicle along its parking path (Iida), would then initiate the corrective response different from the initial path (Iida), wherein the initial path is the one generated by the gesture command (Kiriya), and would then move the vehicle from its stopped position once that correction has been determined.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the obstacle response generated after a gesture command is initiated as taught by Kiriya to include the path tracking and path correction features of Iida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification as a combination of known elements and methods to yield predictable results (MPEP 2143.I(A)).
Regarding claim 2, Kiriya as modified by Donald and Iida teaches the detection system of claim 1.
With regard to the corrective response as modified in claim 1, Iida further teaches wherein the instructions for the corrective response to the obstacle satisfying the parameter further include instructions to:
move by the vehicle automatically one of a door position and a mirror position associated with the vehicle that avoids the obstacle (In Fig. 8, it is shown that the side of the car, inclusive of the door and mirror positions, is moved such that the vehicle avoids obstacle 30b while maintaining an appropriate threshold distance between the obstacle and the side of the car, i.e., the door and mirrors, as shown by distances d3 and d4 after correction. See also [0097].),
the parameter is a safety area around the vehicle and factors the global positioning information (“For example, the detection module 106 determines that an obstacle has been detected when detecting an object, the distance of which to the vehicle 1 is equal to or smaller than a threshold when the vehicle 1 travels based on the travel path 80 recorded by teacher traveling, among unknown objects absent at the time of the teacher traveling” [0113]. “First, the acquisition module 102 acquires an absolute position, a peripheral image, a speed, a steering angle, a braking operation, and a distance measurement result of the vehicle 1 (S101). Although not illustrated in this flowchart, it is assumed that the acquisition of the absolute position, the peripheral image, the speed, the steering angle, the braking operation, and the distance measurement result of the vehicle 1 by the acquisition module 102 continues during the execution of this flowchart” [0107]. Thus, via continual acquisition of the absolute position of the vehicle and distance measurement results which factors GPS measurements (see [0051-0052] and [0109]), the corrective response requires satisfying a threshold parameter requirement, i.e., a safety area around the vehicle.).
Regarding claim 3, Kiriya as modified by Donald and Iida teaches the detection system of claim 2, with Kiriya further teaching …wherein the instructions to execute the incomplete task for the automated action further include instructions to:
navigate the vehicle within a parking area using commands from the ADS (“The instruction receiving unit 41c receives an instruction signal transmitted from the mobile device 3 by the user outside the vehicle, the instruction signal being indicative of adjusting the parking position of the vehicle 2” [0047]. “The movement control unit 41d controls the vehicle moving apparatus 5 (to be described later) such that the parking position of the vehicle 2 is adjusted” [0048]. Thus, the vehicle navigates at a parking position, i.e., within a parking area using commands from the vehicle control system, i.e., ADS.),
wherein the gesture command is associated with one of parking and unparking the vehicle (As described above, the gesture command, i.e., instruction from the mobile device according to the gesture methods described in the rejection of claim 1, is associated with parking such that commands initiate the adjustment of a parking position.).
Regarding claim 4, Kiriya as modified by Donald and Iida teaches the detection system of claim 3.
Kiriya in view of Iida further teaches wherein the instructions for the corrective response to the obstacle satisfying the parameter further include instructions to:
delay the incomplete task until the vehicle state changes (Kiriya teaches a looped feedback which receives a gesture command before controlling the vehicle based on the instruction received from the gesture. The vehicle stops, i.e., delays, the parking adjustment task as a result of the obstacle. This delay continues until the user either initiates an end signal (S2130 of Fig. 16) or an instruction that does not conflict with the obstacle is received (S2090-S2110), i.e., the vehicle state regarding the performed movement is changed. Iida teaches a similar control loop in which, while the vehicle is in the process of parking, there is an evaluation for an obstacle (S107 and S112 of Fig. 9). Given the halting response of Kiriya, it would be obvious that the corrective response of Iida would continually delay the parking task according to an obstacle requiring a corrective response until a corrective response is received to avoid the obstacle, i.e., the vehicle path state is corrected, or the parking space has been reached, i.e., the vehicle path is complete. Thus, Kiriya and Iida both teach control loops that, in combination, delay the incomplete movement task of parking that has been halted by an obstacle until the vehicle determines a movement in which no obstacle exists or that the parking task is complete.), wherein the vehicle state is anticipated (Given the predetermined parking path as taught by Iida, the vehicle’s movement instruction, i.e., vehicle state, is anticipated.) and the corrective response is a hand gesture by the user that is different than the gesture command (Kiriya teaches the gesture commands as parking adjustment instructions.
As such, given that Iida teaches the corrective response as an automated reconfiguration of the parking path, i.e., a command different than the initial command, Kiriya’s support regarding a looped control for receiving another gesture for adjustment would indicate a corrective response as a gesture different from the initial gesture command which caused the obstacle interference.).
Therefore, this combination of the teachings of Kiriya and Iida is motivated by the same rationale of combining known prior art elements and methods, with a reasonable expectation of success, as was described in the rejection of claim 1 (see MPEP 2143.I(A)).
Regarding claim 5, Kiriya as modified by Donald and Iida teaches the detection system of claim 1.
Iida further teaches …wherein the instructions to notify the user further include instructions to:
generate an alert associated with the obstacle (“The corrected path image 121a may include a message M1 for explaining to the driver a reason for changing the travel path 80. In the example illustrated in FIG. 7, the display control module 109 displays the message M1 “Correct a path because an obstacle is present”” [0092]. Thus, a message, i.e., alert, associated with the obstacle is generated on a smartphone display.), the alert is one of flashing headlights, honking, a verbal alarm, a signal for a wireless device of the user, and a picture for the wireless device (Figure 10 shows the alert described above as one of a signal for a wireless device of the user in addition to a picture for the wireless device.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to have modified the obstacle determination system of Kiriya to include the specifics of obstacle notifications as taught by Iida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because notifying the user with a message and picture of the obstacle and associated correction keeps the user informed regarding any changes to the originally commanded plan for parking the vehicle and allows the user to respond to such automatic corrective actions in the event that the user wishes to issue an alternative command.
Regarding claim 6, Kiriya as modified by Donald and Iida teaches the detection system of claim 5,
with Kiriya in view of Iida further teaching wherein the instructions for the corrective response to the obstacle satisfying the parameter further include instructions to:
receive a vehicle command from the user according to the vehicle state and the alert (Kiriya teaches receiving a new gesture command during vehicle control each time an end signal is not received (see S2130 of Fig. 16). Given the combined teachings regarding the alert of an obstacle and replanning according to Iida, the user would then initiate the next command according to whether the obstacle exists and which gesture would not conflict with the obstacle. Such a command, as taught by Kiriya, would be determined according to the performed movement, i.e., vehicle state, which triggered the obstacle detection.), the vehicle command is different than the gesture command (It would be obvious to one of ordinary skill in the art that, in order to avoid the obstacle which was triggered by the initial gesture command, the new vehicle command would need to be different than the initial gesture command so that the same obstacle is not detected again in S2110 of Fig. 16 of Kiriya once the new movement instruction is received in S2090 of Fig. 16.).
Regarding claim 8, Kiriya as modified by Donald and Iida teaches the detection system of claim 1, with the modifying teachings of Iida further demonstrating that
the obstacle is one of a wall and a person within a boundary area around the vehicle (“The correction module 107 corrects the travel path 80 such that distances d3 and d5 to the trash box 30b and the other unknown object 31 are longer than a threshold while maintaining a state in which distances d1, d2, and d4 to the sidewalls 20 and 21 are longer than the threshold and generates the corrected path 81” [0097]. Thus, the sidewalls are considered as obstacles within the boundary area around the vehicle and must be accounted for when evaluating the threshold distance.); and
the obstacle is proximate to one of a door and a tailgate associated with the vehicle (As shown in Fig. 8, the sidewall and other obstacles are proximate to the door of the vehicle, as demonstrated by distances d1-d4.).
Regarding claim 10, Kiriya teaches a non-transitory computer-readable medium (“The storage unit 43 is a non-volatile memory such as a hard disk drive configured to include an electrical erasable programmable read-only memory (EEPROM), a flash memory, or a magnetic disk” [0037].) comprising:
instructions that when executed by a processor (“The storage unit 43 stores a manual parking threshold value 43a, an automatic parking threshold value 43b, a parking adjustment threshold value 43c, and a program 43d” [0037]. “The program 43d is firmware that is read and executed to control the vehicle apparatus 4 by the control unit 41” [0042]. Thus, there is a storage unit that stores a program, i.e., instructions, that are executed by the control unit which is described in [0035] as a microcomputer, i.e., processor.) cause the processor to:
detect a gesture command for an automated action from a user outside of a vehicle using sensor data (Fig. 15 shows a process in which acceleration instructions and motion instructions are recognized and verified to determine a vehicle instruction. Fig. 13 displays a table including examples of these movement instructions, i.e., automated action, which correspond to gesture commands with specific acceleration and image motions initiated by the user. Fig. 9 shows the user providing movement instructions for the automated action outside the vehicle using acceleration data and image data from the acceleration sensor and camera, respectively (see also [0135-0136] regarding gesture command detection).) …;
predict an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data …, wherein the obstacle causes a halt of the automated action according to a … positioning information about the vehicle (In Fig. 16, S2110 predicts an obstacle in the vicinity of the vehicle which causes the vehicle to stop regardless of the movement command which was received and corresponding movement performed, i.e., vehicle state, when an obstacle is present. This stopping of the vehicle halts the automated action, thus rendering the task incomplete. According to [0219], the prediction is based on detection signals according to clearance sonar, i.e., positioning information about the vehicle.)…
However, Kiriya is silent to teachings explicitly directed to the limitations …wherein a user is authenticated prior to accepting the gesture command…
predict an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notify a user, wherein the obstacle causes a halt of the automated action according to a global positioning information about the vehicle; and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position.
Donald, pertinent to the problem at hand, teaches …wherein a user is authenticated prior to accepting the gesture command (“Assuming the driver attention and biometric authentication steps 108, 110 have been determined positively, the method 120 proceeds to step 110. At 110, a driver gesture is determined from the video data 33 by the gesture determination module 34” [0065]. Thus, there is a step of authenticating the driver before determining the gesture command.)…
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the gesture recognition of Kiriya to include the authentication step of Donald with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification such that the risk of bystanders taking control of the vehicle is significantly reduced (Donald, [0062]).
However, Kiriya as modified by Donald still does not teach …predict an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notify a user, wherein the obstacle causes a halt of the automated action according to a global positioning information about the vehicle; and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position.
Iida, pertinent to the problem at hand, teaches …predict an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notify a user (In Fig. 10, a user is provided with a message regarding an obstacle for the incomplete task of the automated parking maneuver according to the vehicle’s movement and corresponding movement path, i.e., vehicle state. Obstacles are detected via sensor data as described in [0157].), … according to a global positioning information about the vehicle (The vehicle action is tracked based on GPS position information of the vehicle which is acquired as the absolute position of the vehicle as it is parking (see [0157] and S101-S103 of Fig. 9).); and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position (Fig. 9 provides S108-S110, which generate a corrected path different than the initial parking path instruction, i.e., a corrective response that is an automated vehicle command different than the initial command. Figs. 6-8 show the corrected path, which is determined to satisfy a distance threshold, i.e., an obstacle avoidance parameter, automatically. In S111 of Fig. 9, the task which was interrupted by the obstacle, i.e., incomplete, is performed by automatic travel, which is implemented via the travel control module (explained in [0073]) as part of the parking assistance device (see [0060]), i.e., ADS.).
Although Iida does not explicitly describe gesture commands or moving the vehicle from a stop position, when combined with the teachings of Kiriya, one of ordinary skill in the art would be able to combine elements such as the gesture commands and halting/stop position of Kiriya with the corrective response elements of Iida. That is, in summary, Kiriya in view of Iida would satisfy the requirements that the vehicle stops when an obstacle is detected based on sensor data (Kiriya) and on a tracked global position of the vehicle along its parking path (Iida), would then initiate the corrective response different from the initial path (Iida), wherein the initial path is the one generated by the gesture command (Kiriya), and would then move the vehicle from its stopped position once that correction has been determined.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the obstacle response generated after a gesture command is initiated as taught by Kiriya to include the path tracking and path correction features of Iida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification as a combination of known elements and methods to yield predictable results (MPEP 2143.I(A)).
Regarding claim 11, Kiriya as modified by Donald and Iida teaches the non-transitory computer-readable medium of claim 10.
With regard to the corrective response as modified in claim 10, Iida further teaches wherein the instructions for the corrective response to the obstacle satisfying the parameter further include instructions to:
move by the vehicle automatically one of a door position and a mirror position associated with the vehicle that avoids the obstacle (In Fig. 8, it is shown that the side of the car, inclusive of the door and mirror positions, is moved such that the vehicle avoids obstacle 30b while maintaining an appropriate threshold distance between the obstacle and the side of the car, i.e., the door and mirrors, as shown by distances d3 and d4 after correction. See also [0097].),
the parameter is a safety area around the vehicle and factors the global positioning information (“For example, the detection module 106 determines that an obstacle has been detected when detecting an object, the distance of which to the vehicle 1 is equal to or smaller than a threshold when the vehicle 1 travels based on the travel path 80 recorded by teacher traveling, among unknown objects absent at the time of the teacher traveling” [0113]. “First, the acquisition module 102 acquires an absolute position, a peripheral image, a speed, a steering angle, a braking operation, and a distance measurement result of the vehicle 1 (S101). Although not illustrated in this flowchart, it is assumed that the acquisition of the absolute position, the peripheral image, the speed, the steering angle, the braking operation, and the distance measurement result of the vehicle 1 by the acquisition module 102 continues during the execution of this flowchart” [0107]. Thus, via continual acquisition of the absolute position of the vehicle and distance measurement results which factors GPS measurements (see [0051-0052] and [0109]), the corrective response requires satisfying a threshold parameter requirement, i.e., a safety area around the vehicle.).
Regarding claim 12, Kiriya teaches a method comprising:
detecting a gesture command for an automated action from a user outside of a vehicle using sensor data (Fig. 15 shows a process in which acceleration instructions and motion instructions are recognized and verified to determine a vehicle instruction. Fig. 13 displays a table including examples of these movement instructions, i.e., automated action, which correspond to gesture commands with specific acceleration and image motions initiated by the user. Fig. 9 shows the user providing movement instructions for the automated action outside the vehicle using acceleration data and image data from the acceleration sensor and camera, respectively (see also [0135-0136] regarding gesture command detection).) …;
predicting an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data …, wherein the obstacle causes a halt of the automated action according to a … positioning information about the vehicle (In Fig. 16, S2110 predicts an obstacle in the vicinity of the vehicle which causes the vehicle to stop regardless of the movement command which was received and corresponding movement performed, i.e., vehicle state, when an obstacle is present. This stopping of the vehicle halts the automated action, thus rendering the task incomplete. According to [0219], the prediction is based on detection signals according to clearance sonar, i.e., positioning information about the vehicle.)…
However, Kiriya is silent to teachings explicitly directed to the limitations …wherein the user is authenticated prior to accepting the gesture command…
predicting an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notifying a user, wherein the obstacle causes a halt of the automated action according to a global positioning information about the vehicle; and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position.
Donald, pertinent to the problem at hand, teaches … wherein the user is authenticated prior to accepting the gesture command (“Assuming the driver attention and biometric authentication steps 108, 110 have been determined positively, the method 120 proceeds to step 110. At 110, a driver gesture is determined from the video data 33 by the gesture determination module 34” [0065]. Thus, there is a step of authenticating the driver before determining the gesture command.)…
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the gesture recognition of Kiriya to include the authentication step of Donald with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification such that the risk of bystanders taking control of the vehicle is significantly reduced (Donald, [0062]).
However, Kiriya as modified by Donald still does not teach …predicting an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notifying a user, wherein the obstacle causes a halt of the automated action according to a global positioning information about the vehicle; and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position.
Iida, pertinent to the problem at hand, teaches …predicting an obstacle for an incomplete task of the automated action from a vehicle state using the sensor data and notifying a user (In Fig. 10, a user is provided with a message regarding an obstacle for the incomplete task of the automated parking maneuver according to the vehicle’s movement and corresponding movement path, i.e., vehicle state. Obstacles are detected via sensor data as described in Paragraph [0157].), … according to a global positioning information about the vehicle (The vehicle action is tracked based on GPS position information of the vehicle which is acquired as the absolute position of the vehicle as it is parking (see [0157] and S101-S103 of Fig. 9).); and
upon a corrective response that is an automated vehicle command different than the gesture command satisfying a parameter to automatically avoid the obstacle, execute the incomplete task for the automated action by an automated driving system (ADS) moving the vehicle from a stop position (Fig. 9 provides S108-S110 which generates a corrected path different than the initial parking path instruction, i.e., a corrective response that is an automated vehicle command different than the initial command. Fig. 6-8 shows the corrected path which is determined to satisfy a distance threshold, i.e., an obstacle avoidance parameter, automatically. In S111 of Fig. 9, the task which was interrupted by the obstacle, i.e., incomplete, is performed by automatic travel which is implemented via the travel control module (explained in [0073]) as part of the parking assistance device (see [0060]), i.e., ADS.).
Although Iida does not explicitly describe gesture commands or moving the vehicle from a stop position, when combined with the teachings of Kiriya, one of ordinary skill in the art would be able to combine elements such as the gesture commands and the halting/stop position of Kiriya with the corrective response elements of Iida. In summary, Kiriya in view of Iida would satisfy the requirement that the vehicle stops when an obstacle is detected based on sensor data (Kiriya) and additionally on a tracked global position of the vehicle along its parking path (Iida), would then initiate the corrective response, which is different from the initial path (Iida), wherein the initial path is the one generated by the gesture command (Kiriya), and would then move the vehicle from its stopped position once that correction has been determined.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the obstacle response generated after a gesture command is initiated as taught by Kiriya to include the path tracking and path correction features of Iida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification as a combination of known elements and methods to yield predictable results (MPEP 2143.I(A)).
Regarding claim 13, Kiriya as modified by Donald and Iida teaches the method of claim 12.
With regard to the corrective response as modified in claim 12, Iida further teaches wherein the corrective response to the obstacle satisfying the parameter further includes:
moving by the vehicle automatically one of a door position and a mirror position associated with the vehicle that avoids the obstacle (Fig. 8 shows that the side of the car, inclusive of the door and mirror positions, is moved such that the vehicle avoids obstacle 30b while maintaining an appropriate threshold distance between the obstacle and the side of the car, i.e., the door and mirrors, as shown by distances d3 and d4 after correction. See also [0097].),
the parameter is a safety area around the vehicle and factors the global positioning information (“For example, the detection module 106 determines that an obstacle has been detected when detecting an object, the distance of which to the vehicle 1 is equal to or smaller than a threshold when the vehicle 1 travels based on the travel path 80 recorded by teacher traveling, among unknown objects absent at the time of the teacher traveling” [0113]. “First, the acquisition module 102 acquires an absolute position, a peripheral image, a speed, a steering angle, a braking operation, and a distance measurement result of the vehicle 1 (S101). Although not illustrated in this flowchart, it is assumed that the acquisition of the absolute position, the peripheral image, the speed, the steering angle, the braking operation, and the distance measurement result of the vehicle 1 by the acquisition module 102 continues during the execution of this flowchart” [0107]. Thus, via continual acquisition of the absolute position of the vehicle and distance measurement results which factors GPS measurements (see [0051-0052] and [0109]), the corrective response requires satisfying a threshold parameter requirement, i.e., a safety area around the vehicle.).
Regarding claim 14, Kiriya as modified by Donald and Iida teaches the method of claim 13, with Kiriya further teaching …wherein executing the incomplete task for the action further includes:
navigating the vehicle within a parking area using commands from the ADS (“The instruction receiving unit 41c receives an instruction signal transmitted from the mobile device 3 by the user outside the vehicle, the instruction signal being indicative of adjusting the parking position of the vehicle 2” [0047]. “The movement control unit 41d controls the vehicle moving apparatus 5 (to be described later) such that the parking position of the vehicle 2 is adjusted” [0048]. Thus, the vehicle navigates at a parking position, i.e., within a parking area using commands from the vehicle control system, i.e., ADS.),
wherein the gesture command is associated with one of parking and unparking the vehicle (As described above, the gesture command, i.e., instruction from the mobile device according to the gesture methods described in the rejection of claim 1, is associated with parking such that commands initiate the adjustment of a parking position.).
Regarding claim 15, Kiriya as modified by Donald and Iida teaches the method of claim 14.
Kiriya in view of Iida further teaches …wherein the corrective response to the obstacle satisfying the parameter further includes:
delaying the incomplete task until the vehicle state changes (Kiriya teaches a looped feedback which receives a gesture command before controlling the vehicle based on the instruction received from the gesture. The vehicle stops, i.e., delays, the parking adjustment task as a result of the obstacle. Such a delay continues until the user either initiates an end signal (S2130 of Fig. 16) or an instruction which does not interfere with the obstacle is received (S2090-S2110), i.e., the vehicle state regarding the performed movement is changed. Iida teaches a similar control loop in which, while the vehicle is in the process of parking, there is an evaluation for an obstacle (S107 and S112 of Fig. 9). Given the halting response of Kiriya, it would be obvious that the corrective response of Iida would continually delay the parking task until a corrective response is received to avoid the obstacle, i.e., the vehicle path state is corrected, or the parking space has been reached, i.e., the vehicle path is complete. Thus, Kiriya and Iida both teach control loops that, in combination, delay the incomplete movement task of parking that has been halted by an obstacle until the vehicle determines a movement in which no obstacle exists or determines that the parking task is complete.), wherein the vehicle state is anticipated (Given the predetermined parking path as taught by Iida, the vehicle’s movement instruction, i.e., vehicle state, is anticipated.) and the corrective response is a hand gesture by the user that is different than the gesture command (Kiriya teaches the gesture commands as parking adjustment instructions.
As such, given that Iida teaches the corrective response as an automated reconfiguration of the parking path, i.e., a command which is different than the initial command, Kiriya’s support regarding a looped control for receiving another gesture for adjustment would indicate a corrective response as a gesture different from the initial gesture command which caused the obstacle interference.).
Therefore, this combination of teachings regarding Kiriya and Iida is motivated by the same combination of known methods and prior art elements that yields a reasonable expectation of success as was described in the rejection of claim 1 (see MPEP 2143.I(A)).
Regarding claim 16, Kiriya as modified by Donald and Iida teaches the method of claim 12.
Iida further teaches …wherein notifying the user further includes:
generating an alert associated with the obstacle (“The corrected path image 121a may include a message M1 for explaining to the driver a reason for changing the travel path 80. In the example illustrated in FIG. 7, the display control module 109 displays the message M1 “Correct a path because an obstacle is present”” [0092]. Thus, a message, i.e., an alert, associated with the obstacle is generated on a smartphone display.), the alert is one of flashing headlights, honking, a verbal alarm, a signal for a wireless device of the user, and a picture for the wireless device (Fig. 10 shows the alert described above as a signal for a wireless device of the user in addition to a picture for the wireless device.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to have modified the obstacle determination system of Kiriya to include the specifics of obstacle notifications as taught by Iida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because notifying the user with a message and picture of the obstacle and associated correction keeps the user informed regarding any changes to the originally commanded plan for parking the vehicle and allows the user to respond to such automatic corrective actions in the event that the user wishes to issue an alternative command.
Regarding claim 17, Kiriya as modified by Donald and Iida teaches the method of claim 16,
with Kiriya in view of Iida further teaching …wherein the corrective response to the obstacle satisfying the parameter further includes:
receiving a vehicle command from the user according to the vehicle state and the alert (Kiriya teaches receiving a new gesture command during vehicle control each time an end signal is not received (see S2130 of Fig. 16). Given the combined teachings regarding the alert of an obstacle and the replanning according to Iida, the user would then initiate the next command according to whether or not the obstacle exists and which gesture would not be interfered with by the obstacle. Such a command, as taught by Kiriya, would be determined according to the performed movement, i.e., the vehicle state, which triggered the obstacle detection.), the vehicle command is different than the gesture command (It would be obvious to one of ordinary skill in the art that, in order to avoid the obstacle which was encountered under the initial gesture command, the new vehicle command would need to be different than the initial gesture command so that the same obstacle is not detected again in S2110 of Fig. 16 of Kiriya once the new movement instruction is received in S2090 of Fig. 16.).
Regarding claim 19, Kiriya as modified by Donald and Iida teaches the method of claim 12, with the modifying teachings of Iida further demonstrating
…the obstacle is one of a wall and a person within a boundary area around the vehicle (“The correction module 107 corrects the travel path 80 such that distances d3 and d5 to the trash box 30b and the other unknown object 31 are longer than a threshold while maintaining a state in which distances d1, d2, and d4 to the sidewalls 20 and 21 are longer than the threshold and generates the corrected path 81” [0097]. Thus, the sidewalls are considered as obstacles within the boundary area around the vehicle and must be accounted for when evaluating the threshold distance.); and
the obstacle is proximate to one of a door and a tailgate associated with the vehicle (As shown in Fig. 8, the sidewall and other obstacles are proximate to the door of the vehicle, as demonstrated by distances d1-d4.).
Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kiriya in view of Donald, further in view of Iida, and further in view of Google for Developers (“Gesture recognition - ML on Raspberry Pi with MediaPipe Series”, 2023).
Regarding claim 7, Kiriya in view of Donald and Iida teaches the detection system of claim 5,
with Kiriya further teaching, wherein the instructions to detect the gesture command for the automated action further include instructions to:
infer by a learning model a feature of the gesture command using the sensor data, the learning model is trained with data about the user and the vehicle state (The first and second recognition units which are described in [0148-0150] use learning models based on the acceleration and image data, wherein the recognition occurs through motions associated with modes of acceleration (acceleration data) and pattern matching (image data) to determine the movement instruction, i.e., vehicle state, associated with data about the user motion.)…
However, Kiriya does not explicitly teach …using object perception and semantic labelling.
Google for Developers, pertinent to the problem at hand, teaches … infer by a learning model a feature of the gesture command using the sensor data, the learning model is trained …using object perception and semantic labelling (The video (relevant screenshots attached) shows a process for identifying the hand being used for the gesture command (object perception), determining features of the hand to interpret the gesture, and using these features as landmarks for labelled gesture definitions/meanings (semantic labelling). In an example, the presenter shows how the labelled object perception is used to control the movement of a small motor.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the gesture recognition of Kiriya to include object perception and semantic labelling techniques as taught by Google for Developers with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because by using semantic labels, such as the labels which indicate the respective motion requirements as taught by Kiriya, the motion recognition of the image data may be efficiently determined by using essential landmark features of the user and user device as taught by Google for Developers.
Regarding claim 18, Kiriya as modified by Donald and Iida teaches the method of claim 16,
with Kiriya further teaching …wherein detecting the gesture command for the automated action further includes:
inferring by a learning model a feature of the gesture command using the sensor data, the learning model is trained with data about the user and the vehicle state (The first and second recognition units which are described in [0148-0150] use learning models based on the acceleration and image data, wherein the recognition occurs through motions associated with modes of acceleration (acceleration data) and pattern matching (image data) to determine the movement instruction, i.e., vehicle state, associated with data about the user motion.)…
However, Kiriya does not explicitly teach …using object perception and semantic labelling.
Google for Developers, pertinent to the problem at hand, teaches … inferring by a learning model a feature of the gesture command using the sensor data, the learning model is trained …using object perception and semantic labelling (The video (relevant screenshots attached) shows a process for identifying the hand being used for the gesture command (object perception), determining features of the hand to interpret the gesture, and using these features as landmarks for labelled gesture definitions/meanings (semantic labelling). In an example, the presenter shows how the labelled object perception is used to control the movement of a small motor.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the gesture recognition of Kiriya to include object perception and semantic labelling techniques as taught by Google for Developers with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because by using semantic labels, such as the labels which indicate the respective motion requirements as taught by Kiriya, the motion recognition of the image data may be efficiently determined by using essential landmark features of the user and user device as taught by Google for Developers.
Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kiriya in view of Donald, further in view of Iida, and further in view of Hara et al. (US 2020/0307635 A1).
Regarding claim 9, Kiriya as modified by Donald and Iida teaches the detection system of claim 1.
However, Kiriya as modified does not explicitly teach wherein the vehicle state is one of an open window, objects left in the vehicle, a person occupying the vehicle, an operator walking away from the vehicle, an authorized person outside the vehicle, and a weather forecast, and
the parameter includes factoring a change from the vehicle state.
Hara, pertinent to the problem at hand, teaches …wherein the vehicle state is one of an open window, objects left in the vehicle, a person occupying the vehicle, an operator walking away from the vehicle, an authorized person outside the vehicle, and a weather forecast (Fig. 9 shows an example of a method in which an open window, an object left in the vehicle (lost article), a user/operator moving away from the vehicle, or a weather forecast (falling matter) is treated as a vehicle state which impacts the autonomous parking function.), and
the parameter includes factoring a change from the vehicle state (A change in the status of one of these states, such as a user proximate to the vehicle moving away from the vehicle, an open window closing, changing weather conditions, or the lost article being removed from the vehicle, will progress the parking operation such that automatic parking may begin or resume once such a change occurs.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the vehicle state and parameter from the system of Kiriya to additionally include those states and parameters which halt or pause a parking operation as taught by Hara with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because such vehicle states reduce the likelihood of property damage or theft, thereby increasing user satisfaction of the automatic parking features.
Regarding claim 20, Kiriya as modified by Donald and Iida teaches the method of claim 12.
However, Kiriya as modified does not explicitly teach wherein the vehicle state is one of an open window, objects left in the vehicle, a person occupying the vehicle, an operator walking away from the vehicle, an authorized person outside the vehicle, and a weather forecast, and
the parameter includes factoring a change from the vehicle state.
Hara, pertinent to the problem at hand, teaches …wherein the vehicle state is one of an open window, objects left in the vehicle, a person occupying the vehicle, an operator walking away from the vehicle, an authorized person outside the vehicle, and a weather forecast (Fig. 9 shows an example of a method in which an open window, an object left in the vehicle (lost article), a user/operator moving away from the vehicle, or a weather forecast (falling matter) is treated as a vehicle state which impacts the autonomous parking function.), and
the parameter includes factoring a change from the vehicle state (A change in the status of one of these states, such as a user proximate to the vehicle moving away from the vehicle, an open window closing, changing weather conditions, or the lost article being removed from the vehicle, will progress the parking operation such that automatic parking may begin or resume once such a change occurs.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the vehicle state and parameter from the system of Kiriya to additionally include those states and parameters which halt or pause a parking operation as taught by Hara with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification because such vehicle states reduce the likelihood of property damage or theft, thereby increasing user satisfaction of the automatic parking features.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the additional references cited in the “Notice of References Cited” (PTO Form 892), which are specifically relevant to the amended limitations.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY L MOLNAR whose telephone number is (571)272-2276. The examiner can normally be reached 8 A.M. to 3 P.M. EST Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jonathan (Wade) Miles can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.L.M./Examiner, Art Unit 3656
/WADE MILES/Supervisory Patent Examiner, Art Unit 3656