Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Objections
Claims 4 and 17 are objected to because of the following informalities: “the image data in to identify an emotional state” should read “the image data in order to identify an emotional state”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 12, 13, 15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 20150131857 A1).
Regarding claim 1, Han teaches a method for automated operation of a vehicle responsive to collected sensor data, the method comprising: receiving image data from an optical sensor within the vehicle while a person is in the vehicle (Paragraphs 8, 13).
In accordance with one aspect of the present invention, a vehicle may include an image capturing unit (e.g., imaging device, camera, etc.) mounted within the vehicle and configured to capture a gesture image of a gesture area including a driver gesture or a passenger gesture, an image analysis unit configured to detect an object of interest in the gesture image captured by the image capturing unit and determine whether the object of interest is related to the driver, and a controller configured to recognize a gesture expressed by the object of interest and generate a control signal that corresponds to the gesture when the object of interest is related to the driver.[P-8]
In accordance with another aspect of the present invention, a method for controlling a vehicle may include capturing, by an imaging device, a gesture image of a gesture area including a driver gesture or a passenger gesture, detecting, by a controller, an object of interest in the captured gesture image of the gesture area, determining, by the controller, whether the object of interest belongs to the driver, and recognizing, by the controller, a gesture expressed by the object of interest and generating, by the controller, a control signal that corresponds to the gesture when the object of interest belongs to the driver.[P-13]
Han teaches analyzing the image data in comparison to stored gesture data to recognize a gesture of the person, wherein the stored gesture data identifies a plurality of gestures that respectively correspond to a plurality of actions, wherein the plurality of gestures includes the gesture (Paragraphs 12, 66, 69).
The vehicle may further include a memory configured to store specific gestures and specific operations in a mapping mode. The controller may be configured to search the memory for a specific gesture that corresponds to the gesture expressed by the object of interest, and generate a control signal to execute a specific operation mapped to a detected specific gesture. The memory may be executed by the controller to store a specific gesture and an operation to change gesture recognition authority in a mapping mode. The controller may be configured to generate a control signal to change the gesture recognition authority when the gesture expressed by the object of interest corresponds to the specific gesture. The changing of the gesture recognition authority may include extending a holder of the gesture recognition authority to the passenger, and restricting the holder of the gesture recognition authority to the driver.[P-12]
The controller 131 may be configured to recognize a gesture expressed by the object of interest, using at least one of known gesture recognition technologies. For example, when a motion expressed by the hand of the driver is recognized, a motion pattern that indicates a motion of the hand may be detected from the gesture image, and whether the detected motion pattern corresponds to a motion pattern stored in the memory 132 may be determined. To determine the correspondence between the two patterns, the controller 131 may use one of various algorithms such as Dynamic Time Warping (DTW) and Hidden Markov Model (HMM). The memory 132 may be configured to store specific gestures and events that correspond to the gestures, in a mapping mode. Accordingly, the controller 131 may be configured to search the memory 132 for a specific gesture that corresponds to the gesture recognized in the gesture image, and generate a control signal to execute an event that corresponds to a detected specific gesture. A detailed description is now given of operation of the controller 131 with reference to FIGS. 10 and 11.[P-66]
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140 [P-69]
Here, we see that various types of gestures are stored in the memory of the vehicle. Although Han does not explicitly recite comparing the image data to stored gesture data to recognize the gesture of the person, mapping the captured gesture image to the gestures stored in the vehicle's memory in order to find a match would obviously require a comparison of the image against the stored gesture data to find a mapping match between the gestures.
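By way of illustration only, the following sketch shows the kind of comparison discussed above: a motion pattern extracted from the image data is compared against stored gesture templates using a dynamic-time-warping distance, one of the algorithms Han names in Paragraph 66. The code and every name in it (GESTURE_TEMPLATES, match_gesture, the example patterns and threshold) are hypothetical assumptions made for the example and are not taken from Han.

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D motion sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

# Stored gesture data: each named gesture maps to a reference motion pattern.
GESTURE_TEMPLATES = {
    "gesture_1": [0.0, 0.2, 0.5, 0.9, 1.0],  # hypothetical upward swipe
    "gesture_2": [1.0, 0.8, 0.4, 0.1, 0.0],  # hypothetical downward swipe
    "gesture_3": [0.0, 0.5, 1.0, 0.5, 0.0],  # hypothetical wave
}

def match_gesture(observed, threshold=1.5):
    """Compare the observed pattern against every stored template and return
    the closest match, or None if no template is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_TEMPLATES.items():
        dist = dtw_distance(observed, template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

captured = [0.05, 0.25, 0.55, 0.85, 1.0]  # motion pattern extracted from the image data
print(match_gesture(captured))            # prints "gesture_1"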
Han then teaches identifying, based on the stored gesture data, an action that corresponds to the gesture, wherein the plurality of actions includes the action; and automatically causing the action to be performed in association with the vehicle in response to recognizing the gesture (Paragraph 69).
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140 [P-69]
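By way of illustration only, the gesture-to-operation mapping described in Paragraph 69 can be expressed as a simple lookup from a recognized gesture to the operation whose control signal is generated and transmitted to the AVN device. The names and dictionary below are hypothetical and are not taken from Han.

GESTURE_ACTIONS = {
    "gesture_1": "turn_on_audio",
    "gesture_2": "turn_on_video",
    "gesture_3": "turn_on_navigation",
}

def control_signal_for(gesture_name):
    """Identify the action mapped to the recognized gesture and build the
    control signal that would be transmitted to the AVN device."""
    action = GESTURE_ACTIONS.get(gesture_name)
    if action is None:
        return None  # unrecognized gesture: no control signal is generated
    return {"target": "AVN", "command": action}

print(control_signal_for("gesture_2"))  # {'target': 'AVN', 'command': 'turn_on_video'}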
Regarding claim 6, Han teaches that causing the action to be performed includes causing the vehicle to automatically control at least an aspect of the vehicle to modify navigation of the vehicle (Paragraphs 46, 69, 70).
The user may manipulate the AVN input unit 142 to input a command to operate the AVN device 140. The AVN input unit 142 may be disposed near (e.g., adjacent to) the AVN display 141 in the form of hard keys as illustrated in FIG. 3. Alternatively, when the AVN display 141 is implemented as a touchscreen, the AVN display 141 may further function as the AVN input unit 142. A speaker 143 configured to output sound may be disposed within the vehicle 100, and sound necessary for audio, video and navigation functions may be output from the speaker 143.[P-46]
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140.[P-69]
Alternatively, when at least two of the audio, video and navigation functions are performed, a specific gesture and an operation to switch a screen displayed on the AVN display 141 may be stored in a mapping mode. For example, an operation to switch to an audio screen may be mapped to gesture 4, and an operation to switch to a navigation screen may be mapped to gesture 5. Accordingly, when the gesture recognized by the controller 131 is gesture 4, the controller 131 may be configured to generate a control signal to switch the screen displayed on the AVN display 141 to the audio screen and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 5, the controller 131 may be configured to generate a control signal to switch the screen displayed on the AVN display 141 to the navigation screen and transmit the control signal to the AVN device 140.[P-70]
Regarding claim 12, Han teaches that the gesture is associated with a position of at least a portion of the person (Paragraphs 9-11).
The image analysis unit may be configured to extract a pattern of interest with respect to the object of interest and determine whether the pattern of interest has a predefined feature. The image analysis unit may also be configured to determine that the object of interest is related to the driver (e.g., is that of the driver and not the passenger) when the pattern of interest has the predefined feature. The object of interest may be an arm or a hand of a person. The pattern of interest may include a wrist connection pattern formed by connecting an end of the arm and a wrist which is a connection part between the arm and the hand.[P-9]
The predefined feature may include a feature in which the wrist connection pattern starts from a left or right side of the gesture area. When the vehicle is a left hand drive (LHD) vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the wrist connection pattern starts from the left side of the gesture area. When the vehicle is a right hand drive (RHD) vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the wrist connection pattern starts from the right side of the gesture area.[P-10]
The pattern of interest may include a first finger pattern formed by connecting a wrist which is a connection part between the arm and the hand, and a thumb end of the hand, and a second finger pattern formed by connecting the wrist and another finger end of the hand. The predefined feature may include a feature in which the first finger pattern is located at a left or right side of the second finger pattern. When the vehicle is an LHD vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the first finger pattern is located at the left side of the second finger pattern. When the vehicle is an RHD vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver if the first finger pattern is located at the right side of the second finger pattern.[P-11]
Regarding claim 13, Han teaches that the gesture is associated with a movement of at least a portion of the person (Paragraphs 9-11).
The image analysis unit may be configured to extract a pattern of interest with respect to the object of interest and determine whether the pattern of interest has a predefined feature. The image analysis unit may also be configured to determine that the object of interest is related to the driver (e.g., is that of the driver and not the passenger) when the pattern of interest has the predefined feature. The object of interest may be an arm or a hand of a person. The pattern of interest may include a wrist connection pattern formed by connecting an end of the arm and a wrist which is a connection part between the arm and the hand.[P-9]
The predefined feature may include a feature in which the wrist connection pattern starts from a left or right side of the gesture area. When the vehicle is a left hand drive (LHD) vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the wrist connection pattern starts from the left side of the gesture area. When the vehicle is a right hand drive (RHD) vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the wrist connection pattern starts from the right side of the gesture area.[P-10]
The pattern of interest may include a first finger pattern formed by connecting a wrist which is a connection part between the arm and the hand, and a thumb end of the hand, and a second finger pattern formed by connecting the wrist and another finger end of the hand. The predefined feature may include a feature in which the first finger pattern is located at a left or right side of the second finger pattern. When the vehicle is an LHD vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the first finger pattern is located at the left side of the second finger pattern. When the vehicle is an RHD vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver if the first finger pattern is located at the right side of the second finger pattern.[P-11]
Regarding claim 15, Han teaches a system for automated operation of a vehicle responsive to collected sensor data, the system comprising: at least one memory; and at least one processor, wherein execution of instructions stored in the at least one memory by the at least one processor causes the at least one processor to: receive image data from an optical sensor within the vehicle while a person is in the vehicle (Paragraphs 8, 12, 13).
In accordance with one aspect of the present invention, a vehicle may include an image capturing unit (e.g., imaging device, camera, etc.) mounted within the vehicle and configured to capture a gesture image of a gesture area including a driver gesture or a passenger gesture, an image analysis unit configured to detect an object of interest in the gesture image captured by the image capturing unit and determine whether the object of interest is related to the driver, and a controller configured to recognize a gesture expressed by the object of interest and generate a control signal that corresponds to the gesture when the object of interest is related to the driver.[P-8]
The vehicle may further include a memory configured to store specific gestures and specific operations in a mapping mode. The controller may be configured to search the memory for a specific gesture that corresponds to the gesture expressed by the object of interest, and generate a control signal to execute a specific operation mapped to a detected specific gesture. The memory may be executed by the controller to store a specific gesture and an operation to change gesture recognition authority in a mapping mode. The controller may be configured to generate a control signal to change the gesture recognition authority when the gesture expressed by the object of interest corresponds to the specific gesture. The changing of the gesture recognition authority may include extending a holder of the gesture recognition authority to the passenger, and restricting the holder of the gesture recognition authority to the driver.[P-12]
In accordance with another aspect of the present invention, a method for controlling a vehicle may include capturing, by an imaging device, a gesture image of a gesture area including a driver gesture or a passenger gesture, detecting, by a controller, an object of interest in the captured gesture image of the gesture area, determining, by the controller, whether the object of interest belongs to the driver, and recognizing, by the controller, a gesture expressed by the object of interest and generating, by the controller, a control signal that corresponds to the gesture when the object of interest belongs to the driver.[P-13]
Han also teaches analyzing the image data in comparison to stored gesture data to recognize a gesture of the person, wherein the stored gesture data identifies a plurality of gestures that respectively correspond to a plurality of actions, wherein the plurality of gestures includes the gesture (Paragraphs 12, 66, 69).
The vehicle may further include a memory configured to store specific gestures and specific operations in a mapping mode. The controller may be configured to search the memory for a specific gesture that corresponds to the gesture expressed by the object of interest, and generate a control signal to execute a specific operation mapped to a detected specific gesture. The memory may be executed by the controller to store a specific gesture and an operation to change gesture recognition authority in a mapping mode. The controller may be configured to generate a control signal to change the gesture recognition authority when the gesture expressed by the object of interest corresponds to the specific gesture. The changing of the gesture recognition authority may include extending a holder of the gesture recognition authority to the passenger, and restricting the holder of the gesture recognition authority to the driver.[P-12]
The controller 131 may be configured to recognize a gesture expressed by the object of interest, using at least one of known gesture recognition technologies. For example, when a motion expressed by the hand of the driver is recognized, a motion pattern that indicates a motion of the hand may be detected from the gesture image, and whether the detected motion pattern corresponds to a motion pattern stored in the memory 132 may be determined. To determine the correspondence between the two patterns, the controller 131 may use one of various algorithms such as Dynamic Time Warping (DTW) and Hidden Markov Model (HMM). The memory 132 may be configured to store specific gestures and events that correspond to the gestures, in a mapping mode. Accordingly, the controller 131 may be configured to search the memory 132 for a specific gesture that corresponds to the gesture recognized in the gesture image, and generate a control signal to execute an event that corresponds to a detected specific gesture. A detailed description is now given of operation of the controller 131 with reference to FIGS. 10 and 11.[P-66]
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140 [P-69]
Here, we see that various types of gestures are stored in the memory of the vehicle. Although Han does not explicitly recite comparing the image data to stored gesture data to recognize the gesture of the person, mapping the captured gesture image to the gestures stored in the vehicle's memory in order to find a match would obviously require a comparison of the image against the stored gesture data to find a mapping match between the gestures.
Han also teaches identifying, based on the stored gesture data, an action that corresponds to the gesture, wherein the plurality of actions includes the action; and automatically cause the action to be performed in association with the vehicle in response to recognizing the gesture (Paragraph 69).
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140 [P-69]
Regarding claim 18, Han teaches that causing the action to be performed includes causing the vehicle to automatically control at least an aspect of the vehicle to modify navigation of the vehicle (Paragraphs 46, 69, 70).
The user may manipulate the AVN input unit 142 to input a command to operate the AVN device 140. The AVN input unit 142 may be disposed near (e.g., adjacent to) the AVN display 141 in the form of hard keys as illustrated in FIG. 3. Alternatively, when the AVN display 141 is implemented as a touchscreen, the AVN display 141 may further function as the AVN input unit 142. A speaker 143 configured to output sound may be disposed within the vehicle 100, and sound necessary for audio, video and navigation functions may be output from the speaker 143.[P-46]
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140.[P-69]
Alternatively, when at least two of the audio, video and navigation functions are performed, a specific gesture and an operation to switch a screen displayed on the AVN display 141 may be stored in a mapping mode. For example, an operation to switch to an audio screen may be mapped to gesture 4, and an operation to switch to a navigation screen may be mapped to gesture 5. Accordingly, when the gesture recognized by the controller 131 is gesture 4, the controller 131 may be configured to generate a control signal to switch the screen displayed on the AVN display 141 to the audio screen and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 5, the controller 131 may be configured to generate a control signal to switch the screen displayed on the AVN display 141 to the navigation screen and transmit the control signal to the AVN device 140.[P-70]
Regarding claim 19, Han teaches that the gesture is associated with a position of at least a portion of the person (Paragraphs 9-11).
The image analysis unit may be configured to extract a pattern of interest with respect to the object of interest and determine whether the pattern of interest has a predefined feature. The image analysis unit may also be configured to determine that the object of interest is related to the driver (e.g., is that of the driver and not the passenger) when the pattern of interest has the predefined feature. The object of interest may be an arm or a hand of a person. The pattern of interest may include a wrist connection pattern formed by connecting an end of the arm and a wrist which is a connection part between the arm and the hand.[P-9]
The predefined feature may include a feature in which the wrist connection pattern starts from a left or right side of the gesture area. When the vehicle is a left hand drive (LHD) vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the wrist connection pattern starts from the left side of the gesture area. When the vehicle is a right hand drive (RHD) vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the wrist connection pattern starts from the right side of the gesture area.[P-10]
The pattern of interest may include a first finger pattern formed by connecting a wrist which is a connection part between the arm and the hand, and a thumb end of the hand, and a second finger pattern formed by connecting the wrist and another finger end of the hand. The predefined feature may include a feature in which the first finger pattern is located at a left or right side of the second finger pattern. When the vehicle is an LHD vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver when the first finger pattern is located at the left side of the second finger pattern. When the vehicle is an RHD vehicle, the image analysis unit may be configured to determine that the object of interest belongs to the driver if the first finger pattern is located at the right side of the second finger pattern.[P-11]
Regarding claim 20, Han teaches a non-transitory computer-readable storage medium having embodied thereon a program executable by a processor for implementing a method for automated operation of a vehicle responsive to collected sensor data, the method comprising: receiving image data from an optical sensor within the vehicle while a person is in the vehicle (Paragraphs 8, 13).
In accordance with one aspect of the present invention, a vehicle may include an image capturing unit (e.g., imaging device, camera, etc.) mounted within the vehicle and configured to capture a gesture image of a gesture area including a driver gesture or a passenger gesture, an image analysis unit configured to detect an object of interest in the gesture image captured by the image capturing unit and determine whether the object of interest is related to the driver, and a controller configured to recognize a gesture expressed by the object of interest and generate a control signal that corresponds to the gesture when the object of interest is related to the driver.[P-8]
In accordance with another aspect of the present invention, a method for controlling a vehicle may include capturing, by an imaging device, a gesture image of a gesture area including a driver gesture or a passenger gesture, detecting, by a controller, an object of interest in the captured gesture image of the gesture area, determining, by the controller, whether the object of interest belongs to the driver, and recognizing, by the controller, a gesture expressed by the object of interest and generating, by the controller, a control signal that corresponds to the gesture when the object of interest belongs to the driver.[P-13]
Han then teaches analyzing the image data in comparison to stored gesture data to recognize a gesture of the person, wherein the stored gesture data identifies a plurality of gestures that respectively correspond to a plurality of actions, wherein the plurality of gestures includes the gesture (Paragraphs 12, 66, 69).
The vehicle may further include a memory configured to store specific gestures and specific operations in a mapping mode. The controller may be configured to search the memory for a specific gesture that corresponds to the gesture expressed by the object of interest, and generate a control signal to execute a specific operation mapped to a detected specific gesture. The memory may be executed by the controller to store a specific gesture and an operation to change gesture recognition authority in a mapping mode. The controller may be configured to generate a control signal to change the gesture recognition authority when the gesture expressed by the object of interest corresponds to the specific gesture. The changing of the gesture recognition authority may include extending a holder of the gesture recognition authority to the passenger, and restricting the holder of the gesture recognition authority to the driver.[P-12]
The controller 131 may be configured to recognize a gesture expressed by the object of interest, using at least one of known gesture recognition technologies. For example, when a motion expressed by the hand of the driver is recognized, a motion pattern that indicates a motion of the hand may be detected from the gesture image, and whether the detected motion pattern corresponds to a motion pattern stored in the memory 132 may be determined. To determine the correspondence between the two patterns, the controller 131 may use one of various algorithms such as Dynamic Time Warping (DTW) and Hidden Markov Model (HMM). The memory 132 may be configured to store specific gestures and events that correspond to the gestures, in a mapping mode. Accordingly, the controller 131 may be configured to search the memory 132 for a specific gesture that corresponds to the gesture recognized in the gesture image, and generate a control signal to execute an event that corresponds to a detected specific gesture. A detailed description is now given of operation of the controller 131 with reference to FIGS. 10 and 11.[P-66]
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140 [P-69]
Here, we see that various types of gestures are stored in the memory of the vehicle. Although Han does not explicitly recite comparing the image data to stored gesture data to recognize the gesture of the person, mapping the captured gesture image to the gestures stored in the vehicle's memory in order to find a match would obviously require a comparison of the image against the stored gesture data to find a mapping match between the gestures.
Furthermore, Han teaches identifying, based on the stored gesture data, an action that corresponds to the gesture, wherein the plurality of actions includes the action; and automatically causing the action to be performed in association with the vehicle in response to recognizing the gesture (Paragraph 69).
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140 [P-69]
Claims 2, 3, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 20150131857 A1) in view of Ng-Thow-Hing et al. (US 20170076415 A1).
Regarding claim 2, Han fails to teach that a database includes the stored gesture data, and that identifying the action includes querying the database using a query to retrieve the action from a record in the database, wherein the query is associated with the gesture, and wherein the record is associated with the gesture.
Ng-Thow-Hing, on the other hand, teaches that a database includes the stored gesture data, and that identifying the action includes querying the database using a query to retrieve the action from a record in the database, wherein the query is associated with the gesture, and wherein the record is associated with the gesture (Paragraphs 23, 48).
The action component 140 correlates the identified physical gesture with the action to be performed. For example, the action component 140 may include a database linking physical gestures with actions. In another embodiment, the action component 140 may attempt to infer a desired action from the physical gesture. The action component 140 then manages the captured image by causing the action to be performed on the captured image.[P-23]
The action component 730 is a means correlating a physical gesture to actions. For example, the action component 730 may include a database, table, mapping engine and so on for associating physical gestures to action. The action component 730 is also a means for performing the action. In one embodiment, the action component 730 includes the means for managing data such as images associated with the POIs and associated metadata.[P-48]
Here, Ng-Thow-Hing teaches linking a recorded physical gesture to gestures stored in a database, using a query or mapping engine to match the gesture to a corresponding action.
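By way of illustration only, the following sketch shows a database query that retrieves the action from a record associated with a recognized gesture, along the lines of the gesture/action database described by Ng-Thow-Hing (Paragraph 23). The schema, table name, and gesture/action values are assumptions made for the example and are not taken from the reference.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gesture_actions (gesture TEXT PRIMARY KEY, action TEXT)")
conn.executemany(
    "INSERT INTO gesture_actions VALUES (?, ?)",
    [("swipe_left", "delete_image"), ("swipe_right", "save_image"), ("pinch", "zoom_out")],
)

def lookup_action(gesture):
    """Query the database using the recognized gesture and return the action
    stored in the matching record, if any."""
    row = conn.execute(
        "SELECT action FROM gesture_actions WHERE gesture = ?", (gesture,)
    ).fetchone()
    return row[0] if row else None

print(lookup_action("swipe_right"))  # prints "save_image"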
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Ng-Thow-Hing's teaching with Han's teaching in order to enable a more effective method of parsing and matching captured gestures with stored actions.
Regarding claim 3, Han teaches that the action is based on the gesture and the audio data (Paragraph 69).
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140 [P-69]
Han fails to teach receiving audio data from a microphone that is associated with the vehicle, the audio data recorded while the person is in the vehicle.
Ng-Thow-Hing, on the other hand, teaches receiving audio data from a microphone that is associated with the vehicle, the audio data recorded while the person is in the vehicle (Paragraphs 21, 36, 47).
In response to viewing a captured image on the HUD 120, the user may perform actions, such as saving or deleting the captured image using a physical gesture. The sensor component 130 monitors the user's behavior for physical gestures. The physical gestures may include a variety of user actions such as verbal commands, gesture inputs (e.g., a swipe, a two-finger swipe, a pinch), and/or a facial movement. For example, the sensor component 130 may detect the user waving their finger in the exemplary manner shown in 135. The sensor component 130 may utilize imaging devices, facial recognition, gesture recognition, light sensors, microphones, audio sensors, and other equipment to facilitate detecting physical gestures.[P-21]
In another embodiment, the alert being triggered may result in an audio notification. In one embodiment, the audio notification may be an alarm. Alternatively, the audio notification may be a computer generated spoken alert. For example, when passing a POI, the name of the POI may be spoken with a general direction. For example, when passing a POI of a restaurant, the audio notification may be “Restaurant, on the right.” In another embodiment, a visual notification and audio notification may be used in combination in response to the alert being triggered.[P-36]
In one embodiment, the sensor component 725 is a means (e.g., hardware, non-transitory computer-readable medium, firmware) for detecting the physical behavior of a user so that the sensor component may detect when the user has performed a physical gesture. Accordingly, as discussed above, the sensor component may include a means for imaging, facial recognition, gesture recognition, light sensing, audio sensing, and other equipment to facilitate monitoring the user's behavior for physical gestures.[P-47]
Here, we see Ng-Thow-Hing receiving audio data from a microphone that is associated with the vehicle, in conjunction with the image gesture data, wherein the audio data is recorded while the person is in the vehicle.
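By way of illustration only, the following sketch shows an action being selected from both a recognized gesture and audio data captured by an in-vehicle microphone, as claim 3 recites. Neither reference spells out this combination logic, so the gesture names, phrases, and rules below are purely hypothetical.

def action_from_gesture_and_audio(gesture, transcript):
    """Select an action from the recognized gesture, refined by audio recorded
    while the person is in the vehicle."""
    text = transcript.lower()
    if gesture == "point_at_display" and "navigation" in text:
        return "open_navigation"
    if gesture == "point_at_display" and "music" in text:
        return "open_audio_player"
    if gesture == "swipe_down" and "volume" in text:
        return "lower_volume"
    return "no_action"

print(action_from_gesture_and_audio("point_at_display", "show me the navigation map"))  # open_navigation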
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Ng-Thow-Hing's teaching with Han's teaching in order to improve gesture-specific and physiology-specific recognition and control of the vehicle.
Regarding claim 16, Han teaches that the action is based on the gesture and the audio data (Paragraph 69).
Various types of gestures mapped to different operations of the AVN device 140 may be stored in the memory 132. For example, gesture 1 may be mapped to an operation to turn on the audio function, gesture 2 may be mapped to (e.g., may correspond to) an operation to turn on the video function, and gesture 3 may be mapped to an operation to turn on the navigation function. When the gesture recognized by the controller 131 is gesture 1, the controller 131 may be configured to generate a control signal to turn on the audio function and transmit the control signal to the AVN device 140. When the gesture recognized by the controller 131 is gesture 2 or gesture 3, the controller 131 may be configured to generate a control signal to turn on the video function or the navigation function and transmit the control signal to the AVN device 140 [P-69]
Han fails to teach receiving audio data from a microphone that is associated with the vehicle, the audio data recorded while the person is in the vehicle.
Ng-Thow-Hing, on the other hand, teaches that the execution of instructions stored in the at least one memory by the at least one processor causes the at least one processor to receive audio data from a microphone that is associated with the vehicle, the audio data recorded while the person is in the vehicle (Paragraphs 21, 36, 47).
In response to viewing a captured image on the HUD 120, the user may perform actions, such as saving or deleting the captured image using a physical gesture. The sensor component 130 monitors the user's behavior for physical gestures. The physical gestures may include a variety of user actions such as verbal commands, gesture inputs (e.g., a swipe, a two-finger swipe, a pinch), and/or a facial movement. For example, the sensor component 130 may detect the user waving their finger in the exemplary manner shown in 135. The sensor component 130 may utilize imaging devices, facial recognition, gesture recognition, light sensors, microphones, audio sensors, and other equipment to facilitate detecting physical gestures.[P-21]
In another embodiment, the alert being triggered may result in an audio notification. In one embodiment, the audio notification may be an alarm. Alternatively, the audio notification may be a computer generated spoken alert. For example, when passing a POI, the name of the POI may be spoken with a general direction. For example, when passing a POI of a restaurant, the audio notification may be “Restaurant, on the right.” In another embodiment, a visual notification and audio notification may be used in combination in response to the alert being triggered.[P-36]
In one embodiment, the sensor component 725 is a means (e.g., hardware, non-transitory computer-readable medium, firmware) for detecting the physical behavior of a user so that the sensor component may detect when the user has performed a physical gesture. Accordingly, as discussed above, the sensor component may include a means for imaging, facial recognition, gesture recognition, light sensing, audio sensing, and other equipment to facilitate monitoring the user's behavior for physical gestures.[P-47]
Here, we see Ng-Thow-Hing receiving audio data from a microphone that is associated with the vehicle, in conjunction with the image gesture data, wherein the audio data is recorded while the person is in the vehicle.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Ng-Thow-Hing's teaching with Han's teaching in order to improve gesture-specific and physiology-specific recognition and control of the vehicle.
Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 20150131857 A1) in view of Hoyos et al. (US 20150363986 A1).
Regarding claim 4, Han fails to teach analyzing the image data in order to identify an emotional state of the person, wherein the action is based on the gesture and the emotional state of the person.
Hoyos, on the other hand, teaches analyzing the image data in order to identify an emotional state of the person, wherein the action is based on the gesture and the emotional state of the person (Paragraph 142).
Safety monitoring based on biometric information can also be performed in view of environmental conditions detected by the computing device. For instance, information received from external cameras, position sensors, proximity sensors, accelerometers, on-board diagnostic (OBD) sensors and the like can be used to detect traffic congestion. Paired with abnormal biometric readings, such as elevated pulse, facial expressions indicative of stress, the on-board computer can automatically perform operations to alleviate the stress, such as play soothing music, provide a notification to the user reminding the user to remain calm, re-route the vehicle, automatically adjust the user's schedule, notify others accordingly and the like.[P-142]
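By way of illustration only, the following sketch shows an action that depends on both the recognized gesture and an emotional state identified from the image data, in the spirit of the stress-alleviating operations Hoyos describes in Paragraph 142. The gesture names, state labels, and rules are assumptions made for the example and are not taken from Hoyos.

def select_action(gesture, emotional_state):
    """Base the action on both the recognized gesture and the identified emotional state."""
    if gesture == "raise_hand" and emotional_state == "stressed":
        # Hoyos describes stress-alleviating operations such as soothing music or re-routing.
        return ["play_soothing_music", "offer_reroute"]
    if gesture == "raise_hand":
        return ["open_voice_assistant"]
    return []

print(select_action("raise_hand", "stressed"))  # ['play_soothing_music', 'offer_reroute']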
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Hoyos's teaching with Han's teaching in order to enable improved safety monitoring and operation of the vehicle.
Regarding claim 17, Han fails to teach that the execution of instructions stored in the at least one memory by the at least one processor causes the at least one processor to: analyze the image data in order to identify an emotional state of the person, wherein the action is based on the gesture and the emotional state of the person.
Hoyos, on the other hand, teaches that the execution of instructions stored in the at least one memory by the at least one processor causes the at least one processor to analyze the image data in order to identify an emotional state of the person, wherein the action is based on the gesture and the emotional state of the person (Paragraph 142).
Safety monitoring based on biometric information can also be performed in view of environmental conditions detected by the computing device. For instance, information received from external cameras, position sensors, proximity sensors, accelerometers, on-board diagnostic (OBD) sensors and the like can be used to detect traffic congestion. Paired with abnormal biometric readings, such as elevated pulse, facial expressions indicative of stress, the on-board computer can automatically perform operations to alleviate the stress, such as play soothing music, provide a notification to the user reminding the user to remain calm, re-route the vehicle, automatically adjust the user's schedule, notify others accordingly and the like.[P-142]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Hoyos's teaching with Han's teaching in order to enable improved safety monitoring and operation of the vehicle.
Claims 5, 7, 8, 11, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 20150131857 A1) in view of Penilla et al. (US 20170061965 A1).
Regarding claim 5, Han fails to teach that causing the action to be performed includes causing a recommendation to be output through a user interface.
Penilla, on the other hand, teaches that causing the action to be performed includes causing a recommendation to be output through a user interface (Paragraph 34).
In some implementations, the learning and predicting embodiments may utilize learning and prediction algorithms that are used in machine learning. In one embodiment, certain algorithms may look to patterns of input, inputs to certain user interfaces, inputs that can be identified to biometric patterns, inputs for neural network processing, inputs for machine learning (e.g., identifying relationships between inputs, and filtering based on geo-location and/or vehicle state, in real-time), logic for identifying or recommending a result or a next input, a next screen, a suggested input, suggested data that would be relevant for a particular time, geo-location, state of a vehicle, and/or combinations thereof. In one embodiment, use of machine learning enables the vehicle to learn what is needed by the user, at a particular time, in view of one or more operating/status state of the vehicle, in view of one or more state of one or more sensors of the vehicle. Thus, one or more inputs or data presented to the user may be provided without explicit input, request or programming by a user at that time. Overtime, machine learning can be used to reinforce learned behavior, which can provide weighting to certain inputs.[P-34]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Penilla's teaching with Han's teaching in order to provide more effective, optimized feedback tailored to the specific driver/operator of the vehicle.
Regarding claim 7, Han fails to teach that causing the action to be performed includes rerouting the vehicle.
Penilla, on the other hand, teaches that causing the action to be performed includes rerouting the vehicle (Paragraphs 232, 233).
In FIG. 16B, the displays are shown to be populated with information obtained by cloud services (or obtained by the vehicle, or obtained by a device of the user in the vehicle, or combinations of two or more thereof). The system may alert the user that an accident is up ahead. The user, based on account information (e.g., history of user, propensity, or likelihood), may usually select to re-route, so the system automatically provides a re-route in the map on the display. In one embodiment, data for information associated with the geo-location is sent to the vehicle when the profile of the user identifies likelihood for consumption of the information associated with the geo-location. An example may be, without limitation, a user drives by a Chevron gas station, but the user prefers Teds Gas, so the user will not stop, even though the vehicle needs gas and the user is proximate to Chevron. The user would be viewed to not have a likelihood to consume information regarding the nearby Chevron.[P-232]
If the user's shows that the user does not have appointments or does not urgently need to arrive at the destination, the system may not provide a re-route option if the extra distance is more than the user likes to drive. Other contextual information can be mined, including a learned profile of the user, which shows what the user likes, does, prefers, has done over time as a pattern, etc.[P-233]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Penilla's teaching with Han's teaching in order to provide more effective, optimized feedback tailored to the specific driver/operator of the vehicle.
Regarding claim 8, Han fails to teach that causing the action to be performed includes initiating a call.
Penilla, on the other hand, teaches that causing the action to be performed includes initiating a call (Paragraphs 9, 44, 116).
The methods, systems and apparatus are provided, which include processing systems for executing vehicle responses to voice input. In various configurations, a user's tone of voice is analyzed to determine matches in predefined tones. The tones, in some embodiments, are matched to voice profiles that determine or correlate to a selected vehicle response. The vehicle response to voice input can include, for example, making a setting, finding a map, finding directions, setting entertainment functions, looking up information, selecting a communication tool, making a call, sending a message, looking up a contact, looking up a calendar event, performing an Internet search, controlling a system of the vehicle, etc.[P-9]
The methods, systems and apparatus are provided, which include processing systems for executing vehicle responses to touch input. In various configurations, a user's touch characteristic is analyzed to determine matches in predefined touch characteristics. The touch characteristic, in some embodiments, is matched to touch profiles that determine or correlate to a selected vehicle response. The vehicle response to touch input can include, for example, making a setting, finding a map, finding directions, setting entertainment functions, looking up information, selecting a communication tool, making a call, sending a message, looking up a contact, looking up a calendar event, performing an Internet search, controlling a system of the vehicle, etc.[P-44]
The methods, systems and apparatus are provided, which include processing systems for executing vehicle responses to voice input. In various configurations, a user's tone of voice is analyzed to determine matches in predefined tones. The tones, in some embodiments, are matched to voice profiles that determine or correlate to a selected vehicle response. The vehicle response to voice input can include, for example, making a setting, finding a map, finding directions, setting entertainment functions, looking up information, selecting a communication tool, making a call, sending a message, looking up a contact, looking up a calendar event, performing an Internet search, controlling a system of the vehicle, etc. In general, the vehicle response is tailored to respond to the user's voice input in a way that respects or understands the user's possible mood or possible state of mind. For example, if the user's tone implies that the user is rushed, the system (e.g., vehicle electronics, software, cloud processing, and/or user connected devices) will process that tone in the voice and will provide a vehicle response in a more expedited manner, or without further queries. If the tone implies that the user is relaxed, the system may provide supplemental information in addition to responding to the voice input. For example, if the user asks for a dining spot near a park, the system may also recommend nearby coffee shops, discounts for parking, nearby valet parking, and/or promotions. However, if the user appears stressed or rushed, the supplemental information may be omitted and a response can be quick and to the point. For example, the response can be to show five restaurants near the park, and associated contact/map info, reservations links, or the like. For the relaxed inquiry, the system may attempt to refine the request and as, what type of food are you interested in, or identify coupons available for certain nearby restaurants, before providing a list of four restaurants near the park, and associated contact/map info, reservations links, or the like.[P-116]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Penilla's teaching with Han's teaching in order to provide more effective, optimized feedback tailored to the specific driver/operator of the vehicle.
Regarding claim 11, Han fails to teach that causing the action to be performed includes causing music to be played through a speaker.
Penilla, on the other hand, teaches that causing the action to be performed includes causing music to be played through a speaker (Paragraphs 17, 153, 307).
Optionally, the captured audio sample can be processed to remove noise, such as ambient noise, voice noise of other passengers, music playing in the vehicle, tapping noises, road noise, wind noise, etc. The audio sample is then processed to produce an audio signature. The audio signature may be in the form of an analog signal or digital code. The audio signature may identify certain frequencies in the spoken words, audio modulations, frequency peaks, peak-to-peak identifiable patterns, spikes, pauses, or other characteristics that can identify or distinguish when one spoken word, e.g., command, is said to have a particular tone. In some embodiments, in addition to voice input, other sensors can detect the magnitude of sensed touch inputs, physiological characterizes of the user's body, motions, demeaned, and combinations thereof.[P-17]
In further embodiments, mood may directly affect intensity of feedback. If Angry “turn DOWN the music,” then then vehicle lowers music by 10×, for example. If happy “turn down the music,” then vehicle lowers music by 3×.[P-153]
This type of interactive display and control provided in vehicles can assist vehicle makers to provide fewer buttons that are physical and reduce the cost and weight of a vehicle. In one example, the steering will may have configurable screen to allow the user to adjust the volume, lock or unlock the phone, change the music, access menu items, access the user's home, ask for help, change the dashboard style, set the configuration mode, and the like. As further shown, one of the inputs can be to simply toggle between one or more interaction modes.[P-307]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Penilla's teaching with Han's teaching in order to enable better customization settings on the vehicle screen.
Regarding claim 14, Han fails to teach that the gesture is associated with a posture of at least a portion of the person.
Penilla, on the other hand, teaches that the gesture is associated with a posture of at least a portion of the person (Paragraph 134).
Detecting emotional information can also use passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. For example, a video camera might capture facial expressions, body posture and gestures, while a microphone might capture speech. Other sensors can detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance. In some embodiments, a camera or IR camera can detect temperature changes in a person's skin. For instance, if a user is stressed, the blood rushing to a person's face may elevate the heat pattern or sensed heat from that person's face.[P-134]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Penilla's teaching with Han's teaching in order to enable more effective means of detecting control gestures for the vehicle.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 20150131857 A1) in view of Pisz et al. (CN 105556246 A).
Regarding claim 9, Han fails to teach that causing the action to be performed includes disabling access to the vehicle.
Pisz, on the other hand, teaches that causing the action to be performed includes disabling access to the vehicle (Page 5, Paragraph 6; Page 7, Paragraph 1).
A user input subsystem 36 may include one or more input sensors, comprising a vehicle input sensor 60, an external input device, or both. The vehicle input sensor 60 may include one or more motion cameras or other optical sensors configured to detect a gesture command, one or more touch sensors configured to detect a touch command, one or more microphones configured to detect a voice command, or another vehicular device configured to detect the user input. The user input subsystem may also include an external input device, such as clips 62 and/or a personal electronic device of the user 63, such as a tablet computer, smart phone, or other mobile device. [Pg 5, P-6]
In some cases, user access may be based on the user's position as confirmed by the user positioning subsystem 39. For example, second- or third-row passengers can be allowed or prohibited from accessing various vehicle functions such as a navigation system. Optionally, this setting may be specified via the access information associated with the user profile, rather than via unlimited access associated with the user profile of the user. In some cases, user access can be based on a combination of the user identification applied by the user profile subsystem 38 and the user position detected by the user positioning subsystem 39. For example, a user, although having the access privileges specified by the applied user profile, may be prevented from accessing certain vehicle functions while occupying the driver seat of the moving vehicle. [Pg 7, P-1]
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Pisz's teaching with Han's teaching in order to enable a more reliable security interface governing vehicle accessibility.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 20150131857 A1) in view of King et al. (DE 102013201746 A1).
Regarding claim 10, Han fails to teach that causing the action to be performed includes modifying a zoom level of visible content displayed on a display.
King, on the other hand, teaches that causing the action to be performed includes modifying a zoom level of visible content displayed on a display (Abstract; Page 5, Paragraph 1).
A gesture-based recognition system receives desired command inputs from a vehicle occupant by recognizing and interpreting his gestures. An image of the interior portion of the vehicle is captured and the image of the vehicle occupant is separated from the background in the captured image. The separated image is analyzed and a gesture recognition processor interprets the gesture of the vehicle occupant from the image. A command trigger plays the interpreted desired command along with a confirmation message for the vehicle occupant prior to triggering the command. When the vehicle occupant confirms, the command trigger triggers the interpreted command. Further, an interference engine processor judges the attention status of the vehicle occupant and transmits signals to a driver assistance system when the vehicle occupant is inattentive. The driver assistance system provides warning signals to inattentive vehicle occupants when identifying potential hazards. Further, upon detection of the driver, a driver recognition module restores a set of personalization functions of the vehicle to pre-stored settings.[Abstract]
The picture shown in 3(b) corresponds to a select/display function. To enable this feature, the vehicle occupant must position his index finger in the air and lightly press forward to mimic the actual press of a button or selection of an option. To initiate a selection within a particular area on a display screen, the vehicle occupant must point to it virtually with the index finger substantially in alignment with the area. For example, if the vehicle occupant desires to select a particular location on a displayed road map and zoom out to view areas around the location, he must virtually point to it with his fingers in the air in alignment with the indicated location. Pointing the finger at a specific virtual area, as in 3(b), activates selectable options in the appropriate direction projected forward to the screen. This gesture can be used for various choices, including selecting a particular song in a list, selecting a particular icon in a displayed menu list, or exploring a place of interest on a displayed road map. [Pg 5, P-1]
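By way of illustration only, the following sketch shows the zoom level of displayed content being modified in response to a recognized zoom gesture, consistent with the zoom operation discussed for King above. The gesture names, zoom bounds, and step factor are assumptions made for the example and are not taken from King.

MIN_ZOOM, MAX_ZOOM = 1.0, 8.0

def apply_zoom_gesture(current_zoom, gesture):
    """Return the new zoom level after a recognized zoom gesture is applied."""
    if gesture == "zoom_in":
        return min(current_zoom * 1.25, MAX_ZOOM)
    if gesture == "zoom_out":
        return max(current_zoom / 1.25, MIN_ZOOM)
    return current_zoom  # other gestures leave the zoom level unchanged

print(apply_zoom_gesture(2.0, "zoom_out"))  # prints 1.6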
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine King's teaching with Han's teaching in order to enable better customization settings on the vehicle screen.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY D AFRIFA-KYEI whose telephone number is (571)270-7826. The examiner can normally be reached Monday-Friday 10am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN ZIMMERMAN can be reached at 571-272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANTHONY D AFRIFA-KYEI/Examiner, Art Unit 2686
/BRIAN A ZIMMERMAN/Supervisory Patent Examiner, Art Unit 2686