Prosecution Insights
Last updated: April 19, 2026
Application No. 18/062,548

SYSTEM FOR RECOGNIZING GESTURE FOR VEHICLE AND METHOD FOR CONTROLLING THE SAME

Final Rejection §103

Filed: Dec 06, 2022
Examiner: SU, STEPHANIE T
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hyundai Mobis Co., Ltd.
OA Round: 4 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% — above average (96 granted / 139 resolved; +17.1% vs TC avg)
Interview Lift: +32.3% — strong, measured across resolved cases with interview
Typical Timeline: 3y 5m average prosecution; 35 applications currently pending
Career History: 174 total applications across all art units
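
The headline figures above are internally consistent, which is worth verifying before relying on them. Below is a minimal sanity check in Python — plain arithmetic on the numbers shown, with nothing assumed about the provider's model:

# Sanity-check the career figures shown above.
granted, resolved, pending, total = 96, 139, 35, 174

assert resolved + pending == total   # 139 resolved + 35 pending = 174 applications
allow_rate = granted / resolved      # 96 / 139
print(f"{allow_rate:.1%}")           # 69.1% -> the "69%" career allow rate shown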

Statute-Specific Performance

§101: 18.5% (-21.5% vs TC avg)
§103: 51.6% (+11.6% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 139 resolved cases.
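
Notably, every row above implies the same Tech Center baseline. The quick back-computation below assumes "vs TC avg" is a simple percentage-point delta (an assumption; the formula is not disclosed on this page):

# Back-compute the implied Tech Center average from each statute row above.
examiner_rate = {"101": 18.5, "103": 51.6, "102": 13.5, "112": 15.9}
delta_vs_tc   = {"101": -21.5, "103": 11.6, "102": -26.5, "112": -24.1}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(implied_tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}

All four rows point to an estimated TC-average rate of about 40%, which suggests a single blended baseline rather than per-statute averages.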

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This Office Action is in response to the claims filed on October 23, 2025. Claims 1-14 have been presented for examination. Claims 1-14 are currently rejected.

Claims 1-5 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Imai (U.S. Patent Publication Number 2014/0292652) in view of Srail et al. (U.S. Patent No. 11,204,675), further in view of Jeppsson et al. (U.S. Patent Publication Number 2021/0103348). Claims 6-11 are rejected under 35 U.S.C. 103 as being unpatentable over Imai (U.S. Patent Publication Number 2014/0292652) in view of Srail et al. (U.S. Patent No. 11,204,675) and Jeppsson et al. (U.S. Patent Publication Number 2021/0103348), further in view of Eleftheriou et al. (U.S. Patent Publication Number 2014/0189569).

Response to Arguments

35 U.S.C. 103

Applicant's arguments filed on October 23, 2025 have been fully considered but they are not persuasive. The Applicant argues that Imai in view of Srail and Jeppsson does not disclose, teach, or suggest the elements of claim 1. Specifically, the Applicant appears to describe the disclosure of Jeppsson and argues that benchmark parameters of Jeppsson are “adjusted based on training from radar gestures,” which the Applicant states contrasts with the claims, which require that when the controller fails to recognize, via the gesture recognition means, the gesture of the driver and the driver operates at least two of the plurality of input means, the controller is trained to recognize the gesture of the driver based on the operation of the manipulation means.

The Examiner has considered the arguments presented and respectfully disagrees. First, the Applicant appears to describe the disclosure of Jeppsson and contrasts the teachings to the claimed features without further articulating why the features of Jeppsson contrast with the claim elements. Further, the Applicant appears to merely conclude that the cited references do not disclose the claimed elements without producing contrary evidence establishing that the reference being relied on would not enable a skilled artisan to produce the recited limitations. See MPEP 2145. Therefore, the Applicant’s arguments are not persuasive.

Second, even if contrary evidence were provided, Jeppsson in combination with Imai and Srail discloses the claim elements. Specifically, Jeppsson in at least ¶ 151 discloses that the gestures performed by the user perform manipulation of an interactive element. Jeppsson ¶¶ 48 and 89 also expressly disclose a radar system that recognizes gestures of a user, and indicating a failure of a radar gesture for a plurality of gesture attempts (see at least ¶¶ 25 and 89, “after the failed gesture attempts”), wherein one having ordinary skill in the art would recognize that the plural “failed gesture attempts” includes at least two of a plurality of input means attempted by the driver.
Therefore, Jeppsson discloses “when the controller fails to recognize, via the gesture recognition means, the gesture of the driver and the driver operates at least two of the plurality of input means.” Jeppsson ¶ 89 further discloses that after the failed gesture attempts, a gesture-training module would receive data corresponding to the driver’s movement, which is an operation of manipulation means, which includes detected values of movement of the user to adjust parameters for the gesture to be recognized, wherein Jeppsson defines the adjusted parameters to be based on a machine-learned (i.e., trained) set of parameters, see ¶¶ 88-89. Thus, Jeppsson in combination with Imai and Srail discloses the claim limitation “wherein when the controller fails to recognize, via the gesture recognition means, the gesture of the driver and the driver operates at least two of the plurality of input means, the controller is trained to recognize the gesture of the driver based on the operation of the manipulation means.”

Third, Jeppsson is analogous prior art to Imai. Imai and Jeppsson are analogous prior art because both references are directed to the same technological field of endeavor involving gesture recognition of a driver to perform an operation of a vehicle. See MPEP 2141.01(a). Further, both Imai and Jeppsson disclose receiving a driver operation of manipulation means (see input unit 110 disclosed by Imai in at least ¶ 31, and gesture recognition of Jeppsson in at least ¶ 89). It would have been obvious to one having ordinary skill in the art to have utilized the recognized gesture of Jeppsson in place of the driver input of Imai because a user gesture, such as the gestures in Jeppsson, constitutes a driver input, and the substitution would enable the electronic device of Imai to accurately determine a reach or other radar gesture of the user (Jeppsson ¶ 3). Further, it would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the driver input of Imai with training a controller to recognize the gesture of the driver based on the operation of the manipulation means when the controller fails to recognize, via the gesture recognition means, the gesture of the driver and the driver operates at least two of the plurality of input means, as disclosed by Jeppsson, with reasonable expectation of success, to allow the electronic device and the gesture-training module 106 to learn to accept more variation in how users make radar gestures (Jeppsson ¶ 89).

Therefore, Jeppsson in combination with Imai and Srail discloses the claimed elements. For these reasons, the Examiner maintains the prior art rejection. Additional citations from within the prior art of record are provided for further clarification.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Structure for the limitation “gesture recognition means” may be found in supporting paragraph 12 of the instant specification. Structure for the limitations “manipulation means” and “input means” may be found in paragraph 89 of the instant specification, defining the manipulation means to include a plurality of physical buttons, each input means being an independent button.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Imai (U.S. Patent Publication Number 2014/0292652) in view of Srail et al. (U.S. Patent No. 11,204,675), further in view of Jeppsson et al. (U.S. Patent Publication Number 2021/0103348).

Regarding claim 1, Imai discloses a system for recognizing a gesture in a vehicle, the system comprising:

gesture recognition means having a sensor for recognizing a gesture of a driver; (Imai ¶ 31 discloses an input unit 110 having a plurality of sensors to detect, thereby recognizing, a gesture operation such as touching the surface with a finger by a user sitting in a driver’s seat, also see ¶ 32.)

manipulation means having a plurality of input means for recognizing a touch of the driver; (Imai in at least Figs. 4 and 5 depicts that the input unit 110 receives input including touching the surface of the input unit 110 with a finger in horizontal and vertical directions [i.e., a plurality of input means], also see at least ¶ 31.)

a controller configured to receive a signal from the gesture recognition means and the manipulation means; and (Imai ¶ 31 discloses converting a sensor value detected by the input unit 110 into an operation input signal and outputting the signal to the control unit 120, wherein the sensor value is determined by touching the surface of the input unit 110.)

wherein the controller is configured to recognize, via the gesture recognition means, the gesture of the driver by operation of the manipulation means, (Imai ¶ 34 discloses “The control unit 120 recognizes a gesture operation such as a slide and a flick (a swing-like touch) in a vertical direction as shown in FIG. 2 and FIG. 3 by using coordinate values of the finger U on the input unit 110.”)

Imai does not expressly disclose: the gesture recognition means comprising an electromagnetic radiation-based sensor; display means for displaying a screen based on a signal from the controller; wherein when the controller fails to recognize, via the gesture recognition means, the gesture of the driver and the driver operates at least two of the plurality of input means, the controller is trained to recognize the gesture of the driver based on the operation of the manipulation means.

However, Srail discloses: display means for displaying a screen based on a signal from the controller; (Srail Col. 4 Lines 10-18 discloses “the HMI 12 includes buttons on a steering wheel of the vehicle that are inputs to a driver information center displayed on the vehicle's instrument cluster,” wherein the HMI 12 includes a touch screen, wherein a controller circuit 14 anticipates a potential input between the user and the HMI 12, see Col. 5 Lines 19-21.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have utilized the controller of Imai in place of the controller of Srail with reasonable expectation of success because the substitution would result in “display means for displaying a screen based on a signal from the controller.” Further, it would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the display of Imai to expressly disclose displaying a screen as disclosed in Srail with reasonable expectation of success to enable a user to see and access a selection based on the input, see Col. 5 Lines 3-5 and MPEP 2143.01(G), rendering the limitation to be obvious.

Jeppsson discloses: the gesture recognition means comprising an electromagnetic radiation-based sensor; (Jeppsson ¶ 22 discloses “a radar system that detects and determines radar-based touch-independent gestures (radar gestures) that are made by the user to interact with the electronic device and applications or programs running on the electronic device,” also see at least ¶ 30. One having ordinary skill in the art would recognize that a radar is an electromagnetic radiation-based sensor.)

wherein when the controller fails to recognize, via the gesture recognition means, the gesture of the driver and the driver operates at least two of the plurality of input means (Jeppsson ¶ 89 discloses “after the failed gesture attempts,” wherein one having ordinary skill in the art would recognize that plural “attempts” as disclosed by Jeppsson indicates at least two inputs operated by the user), the controller is trained to recognize the gesture of the driver based on the operation of the manipulation means. (Jeppsson ¶ 89 discloses “The gesture-training module 106 may then receive radar data corresponding to the user's movement (e.g., after the failed gesture attempts) [i.e., at least two inputs] in the radar field and detect values of another set of parameters that are associated with the movement of the user,” such that “adjusted benchmark values are generated based on radar data that represents multiple attempts by the user to perform the gesture,” see ¶ 88. The gestures of the user manipulate the elements of a vehicle simulation environment, see ¶ 138, wherein Jeppsson defines the adjusted parameters to be based on a machine-learned (i.e., trained) set of parameters, see ¶ 89.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have utilized the radar disclosed by Jeppsson in place of the sensor of Imai with reasonable expectation of success because the substitution would result in the gestures of Imai being recognized by a radar.
Further, it would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the combination of Imai and Srail with training a controller to recognize a gesture of a driver when it fails to recognize the gesture, as disclosed by Jeppsson, with reasonable expectation of success, to allow the electronic device and the gesture-training module 106 to learn to accept more variation in how users make radar gestures (e.g., when the variation is consistent) (Jeppsson ¶ 89), rendering the modification to be obvious.

Regarding claim 2, Imai in combination with Srail and Jeppsson discloses the system of claim 1, wherein: the sensor of the gesture recognition means is a camera sensor. (Jeppsson ¶ 82 discloses “The requested gesture can be a ... a camera-based touch-independent gesture,” and detecting the gesture from a camera, see ¶ 111.) It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the combination of Imai and Srail to incorporate that the sensor for gesture recognition is a camera sensor, as disclosed in Jeppsson, with reasonable expectation of success for increased accuracy (Jeppsson ¶ 69), rendering the modification to be obvious.

Regarding claim 3, Imai in combination with Srail and Jeppsson discloses the system of claim 2, wherein: the manipulation means includes at least two input means arranged in a line, and (Imai ¶ 40 discloses “a start point S of user operation and an end point E as a final contact point [i.e., two input means],” wherein “the control unit 120 determines a user's gesture operation to be a straight line.”) when the at least two input means are touched in sequence, the controller is configured to recognize the touch of the at least two input means as a gesture signal. (Imai ¶ 60 discloses that the start point S is 300 milliseconds just before [i.e., in sequence] a final contact point of user operation which is end point E, and determining that the user’s gesture operation is a straight line, see ¶ 40 and at least Fig. 5.)

Regarding claim 4, Imai in combination with Srail and Jeppsson discloses the system of claim 3, wherein: when the at least two input means are touched in sequence within a first time duration of 500 milliseconds (ms), (Imai ¶ 49 discloses “determination of slide operation and flick operations is made at a speed of 100 milliseconds before the end point E.” One having ordinary skill in the art would recognize that 100 milliseconds is within a time duration of 500 milliseconds. See corresponding Fig. 3.) the controller is configured to recognize the touch of the at least two input means as a directional gesture signal. (Imai ¶ 54 discloses “An arc design may be provided so as to cross along the vertical and horizontal directions of the input unit 110,” wherein “The control unit 120 recognizes a gesture operation,” see ¶ 34.)

Regarding claim 5, Imai in combination with Srail and Jeppsson discloses the system of claim 4, wherein: when the ... input means are touched in sequence within the first time duration, the controller is configured to recognize the touch of the ... input means as the directional gesture signal. (Imai ¶ 49 discloses “determination of slide operation and flick operations is made at a speed of 100 milliseconds before the end point E.” One having ordinary skill in the art would recognize that 100 milliseconds is within a time duration of 500 milliseconds. See corresponding Fig. 3.)
Imai does not expressly disclose:
three input means
the manipulation means includes at least three input means arranged in a line, and

However, Srail discloses:
three input means (see Srail Fig. 3)
the manipulation means includes at least three input means arranged in a line, and (Srail Fig. 3, as provided and annotated below, depicts a first input, second input, and third input means that are arranged in a line, also see Col. 9 Lines 23-25.)

[Annotated Srail Fig. 3]

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the input of Imai with a third input as disclosed by Srail with reasonable expectation of success to increase selection options for the user, see MPEP 2143.01(G).

Regarding claim 12, Imai discloses a system for recognizing a gesture equipped in a vehicle, the system comprising:

manipulation means having a plurality of input means for recognizing a touch of a driver; (Imai ¶ 31 discloses an input unit 110 having a plurality of sensors to detect, thereby recognizing, a gesture operation such as touching the surface with a finger by a user sitting in a driver’s seat, also see ¶ 32.)

a controller configured to receive a signal from the manipulation means; and (Imai ¶ 31 discloses converting a sensor value detected by the input unit 110 into an operation input signal and outputting the signal to the control unit 120, wherein the sensor value is determined by touching the surface of the input unit 110.)

wherein, when the ... input means are touched in sequence within a first time duration of 500 milliseconds (ms), (Imai ¶ 49 discloses “determination of slide operation and flick operations is made at a speed of 100 milliseconds before the end point E.” One having ordinary skill in the art would recognize that 100 milliseconds is within a time duration of 500 milliseconds. See corresponding Fig. 3 depicting the input from start point S to end point E.)

the controller is configured to recognize the touch of the at least three input means as a directional gesture signal, (Imai ¶ 54 discloses “An arc design may be provided so as to cross along the vertical and horizontal directions of the input unit 110,” wherein “The control unit 120 recognizes a gesture operation,” see ¶ 34.)

wherein the directional gesture signal includes signals of a leftward direction, a rightward direction, an upward direction, and a downward direction, and (Imai ¶ 34 discloses “The control unit 120 recognizes a gesture operation such as a slide and a flick (a swing-like touch) in a vertical direction as shown in FIG. 2 and FIG. 3.” The moving direction of the user finger U may also be a horizontal direction, see at least ¶ 43 and Fig. 5.)

wherein the controller is configured to recognize, via a gesture recognition means, the gesture of the driver by operation of the manipulation means. (Imai ¶ 34 discloses “The control unit 120 recognizes a gesture operation such as a slide and a flick (a swing-like touch) in a vertical direction as shown in FIG. 2 and FIG. 3 by using coordinate values of the finger U on the input unit 110.”)

Imai does not expressly disclose:
display means for displaying a screen based on a signal received from the controller,
wherein the manipulation means includes at least three input means arranged in a line,
the at least three input means

However, Srail discloses:
display means for displaying a screen based on a signal received from the controller, (Srail Col. 4 Lines 10-18 discloses “the HMI 12 includes buttons on a steering wheel of the vehicle that are inputs to a driver information center displayed on the vehicle's instrument cluster,” wherein the HMI 12 includes a touch screen.)
wherein the manipulation means includes at least three input means arranged in a line, (See Srail Fig. 3 as annotated and provided above.)
the at least three input means (see Srail Fig. 3)

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have utilized the controller of Imai in place of the controller of Srail with reasonable expectation of success because the substitution would result in “display means for displaying a screen based on a signal from the controller.” Further, it would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the display of Imai to expressly disclose displaying a screen and having a third input as disclosed in Srail with reasonable expectation of success to enable a user to see and access a selection based on the input and to have more selection options, see Col. 5 Lines 3-5 and MPEP 2143.01(G), rendering the limitation to be obvious.

Jeppsson discloses: wherein when the controller fails to recognize, via the gesture recognition means, the gesture of the driver and the driver operates the at least three input means, the controller is trained to recognize the gesture of the driver based on the operation of the manipulation means. (Jeppsson ¶ 119 discloses that “Thus, the user's unsuccessful attempt to perform the requested gesture causes the electronic device 102 to repeat the visual element and the request,” wherein the gesture-training module 106 may present the first visual feedback element and instructions a number of times (e.g., one, three, five, or seven times). One having ordinary skill in the art would recognize that providing feedback three times indicates that the user had unsuccessfully performed a gesture three times, thereby providing three inputs.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the combination of Imai and Srail to incorporate that the sensor for gesture recognition is a camera sensor, as disclosed in Jeppsson, with reasonable expectation of success for increased accuracy (Jeppsson ¶ 69), rendering the modification to be obvious.

Regarding claim 13, Imai in combination with Srail and Jeppsson discloses a method of controlling the gesture recognition system of claim 5, the method comprising:

(a) transmitting, by manipulation means, touch signals of a driver to the controller; (Imai ¶ 31 discloses “The input unit 110 is configured to detect a position or motion of the finger U as a gesture operation, convert a sensor value detected by the input unit 110 into an operation input signal, and output [i.e., transmit] the signal to the control unit 120.”)

(b) identifying, by the controller, whether the touch signals of the driver are successively input; (Imai ¶ 60 discloses that the start point S is 300 milliseconds just before [i.e., successively] a final contact point of user operation which is end point E, and determining that the user’s gesture operation is a straight line, see ¶ 40 and at least Fig. 5, wherein the control unit 120 recognizes a gesture operation thereby performing the identifying, see ¶ 34.)
(c) identifying, by the controller, whether the successive touch signals of the driver are different signals; and (Imai ¶ 34 discloses the control unit 120 recognizing a gesture operation, thereby performing the identifying, wherein the gesture operation is recognized by control unit 120, see ¶ 34, such that the gesture includes recognizing a start point S being a position that the user finger U touches the input unit 110, see ¶ 60, and the end point E is when the user operation has been completed, see ¶ 40.)

(d) identifying, by the controller, whether the touch signals of the driver are input within a specific time duration. (Imai ¶ 60 discloses “time before the final contact point may be set to 200 to 400 milliseconds.”)

Regarding claim 14, Imai in combination with Srail and Jeppsson discloses the method of claim 13, wherein: the controller is configured to recognize the touch signals of the driver as a gesture signal when a first condition that the touch signals of the driver are successively input at step (b), (Imai ¶ 60 discloses that the start point S is 300 milliseconds just before [i.e., successively] a final contact point of user operation which is end point E, and determining that the user’s gesture operation is a straight line, see ¶ 40 and at least Fig. 5, wherein the control unit 120 recognizes a gesture operation thereby performing the identifying, see ¶ 34.) a second condition that the successive touch signals of the driver correspond to the different touch signals at step (c), and (Imai ¶ 34 discloses the control unit 120 recognizing a gesture operation, thereby performing the identifying, wherein the gesture operation is recognized by control unit 120, see ¶ 34, such that the gesture includes recognizing a start point S being a position that the user finger U touches the input unit 110, see ¶ 60, and the end point E is when the user operation has been completed, see ¶ 40.) a third condition that the successive touch signals of the driver are input within the specific time duration at step (d) are satisfied. (Imai ¶ 60 discloses “time before the final contact point may be set to 200 to 400 milliseconds.”)

Claims 6-11 are rejected under 35 U.S.C. 103 as being unpatentable over Imai (U.S. Patent Publication Number 2014/0292652) in view of Srail et al. (U.S. Patent No. 11,204,675) and Jeppsson et al. (U.S. Patent Publication Number 2021/0103348), further in view of Eleftheriou et al. (U.S. Patent Publication Number 2014/0189569).

Regarding claim 6, Imai in combination with Srail, Jeppsson, and Eleftheriou discloses the system of claim 5, wherein: [inputs are touched] within a second time duration of 1000 milliseconds (ms), (Imai ¶ 60 discloses “time before the final contact point may be set to 200 to 400 milliseconds.”)

The combination of Imai, Srail, and Jeppsson does not expressly disclose:
the at least three input means comprises first input means, second input means, and third input means arranged in order, and
when the at least three input means are touched in a first order ... the controller is configured to recognize the touch of the at least three input means as a functional gesture signal,
wherein the first order is an order of the first input means, the second input means, the third input means, the second input means, and the first input means.
However, Eleftheriou discloses: the at least three input means comprises first input means, second input means, and third input means arranged in order, and (Eleftheriou ¶ 59 discloses “The user, in this example, has tapped the word "yay." In the illustrated example, the user has inputted a first tap on "a" 122, a second tap on "a" 124 and a third tap on "y" 126 in the sensor detection space.”) when the at least three input means are touched in a first order ... the controller is configured to recognize the touch of the at least three input means as a functional gesture signal, (Eleftheriou ¶ 59 discloses “The user, in this example, has tapped the word "yay." In the illustrated example, the user has inputted a first tap on "a" 122, a second tap on "a" 124 and a third tap on "y" 126 in the sensor detection space,” wherein “recognized gestures can be used to control the functions of a typing controller,” see ¶ 28, such that the typing controller is included in a vehicle, see ¶ 133.) wherein the first order is an order of the first input means, the second input means, the third input means, the second input means, and the first input means. (Eleftheriou ¶ 59 discloses “The user, in this example, has tapped the word "yay." In the illustrated example, the user has inputted a first tap on "a" 122, a second tap on "a" 124 and a third tap on "y" 126 in the sensor detection space.” One having ordinary skill in the art would understand that the user typing a word such as “level” would require the user to input the first input, second input, third input, second input, and the first input.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have utilized the input of Eleftheriou in place of the input of Imai because the substitution would teach or suggest a “first order is an order of the first input means, the second input means, the third input means, the second input means, and the first input means” with reasonable expectation of success. Further, it would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the combination of Imai and Srail to incorporate touch in a first order, as disclosed by Eleftheriou, with reasonable expectation of success to enable faster navigation between inputs, see Eleftheriou ¶ 52.

Regarding claim 7, Imai in combination with Srail, Jeppsson, and Eleftheriou discloses the system of claim 6, wherein: the first input means, the second input means, and the third input means are arranged in a horizontal direction in the manipulation means. (Srail Fig. 3, as provided and annotated below, depicts a first input, second input, and third input means that may be arranged in a horizontal direction, also see Col. 9 Lines 23-25.)

[Annotated Srail Fig. 3]

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the input of Imai with a third input as disclosed by Srail with reasonable expectation of success to increase selection options for the user, see MPEP 2143.01(G).

Regarding claim 8, Imai in combination with Srail, Jeppsson, and Eleftheriou discloses the system of claim 6, wherein: the first input means, the second input means, and the third input means are arranged in a vertical direction. (Srail Fig. 3, as provided and annotated below, depicts a first input, second input, and third input means that may be arranged in a vertical direction, also see Col. 9 Lines 23-25.)

[Annotated Srail Fig. 3, second annotation]

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the input of Imai with a third input as disclosed by Srail with reasonable expectation of success to increase selection options for the user, see MPEP 2143.01(G).

Regarding claim 9, Imai in combination with Srail, Jeppsson, and Eleftheriou discloses the system of claim 7, wherein: the manipulation means further includes fourth input means and fifth input means, and (Srail Fig. 3, as annotated and provided below, depicts a fourth input and a fifth input means with respect to the first, second, and third input means which may be arranged horizontally.)

[Annotated Srail Fig. 3, third annotation]

the fourth input means is disposed above the second input means (see Srail annotated Fig. 3), and the fifth input means is disposed below the second input means (see annotated Srail Fig. 3), so that the fourth input means, the second input means, and the fifth input means are arranged in a line in a vertical direction (see annotated Srail Fig. 3 depicting a fourth input means disposed above the second input means located in the center, and a fifth input means disposed below the second input means, the fourth, second, and fifth input means being arranged in a line vertically. This configuration corresponds to Fig. 7 of the instant application depicting the fourth input means to be disposed above the second input means and the fifth input means to be disposed below the second input means, wherein the first and third input means would be disposed horizontally).

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the input of Imai with a third input as disclosed by Srail with reasonable expectation of success to increase selection options for the user, and because some control types are more suited to particular locations, and, conversely, particular locations are ideal for certain types of controls, such as to be more intuitive to a user, see “Human Factors Design Guidelines for Advanced Traveler Information Systems (ATIS) and Commercial Vehicle Operations (CVO)” and MPEP 2143.01(G).

Regarding claim 10, Imai in combination with Srail, Jeppsson, and Eleftheriou discloses the system of claim 9, wherein: the functional gesture signal includes a signal for executing or cancelling a specific function, and (Imai ¶ 48 discloses that “a control signal corresponding to the operation quantity of the gesture operation is sent to the device (car navigation device) 130,” wherein the control signal may be for “moving down a display (a menu item or the like),” see ¶ 48.) the directional gesture signal includes signals of a leftward direction, a rightward direction, an upward direction, and a downward direction. (Imai ¶ 34 discloses “The control unit 120 recognizes a gesture operation such as a slide and a flick (a swing-like touch) in a vertical direction as shown in FIG. 2 and FIG. 3.” The moving direction of the user finger U may also be a horizontal direction, see at least ¶ 43 and Fig. 5.)

Regarding claim 11, Imai in combination with Srail, Jeppsson, and Eleftheriou discloses the system of claim 10, wherein: the plurality of input means of the manipulation means recognize the touch of the driver via one of a capacitive touch scheme, a pressure-sensitive touch scheme, and a button scheme.
(Imai ¶ 31 discloses “The input unit 110 is an electrostatic capacity type touch pad,” such that “a sensor value of the input unit 110 is changed by touching the surface of the input unit 110 with a finger U.”)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHANIE T SU whose telephone number is (571)272-5326. The examiner can normally be reached Monday to Friday, 9:30AM - 5:00PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANISS CHAD, can be reached on (571)270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEPHANIE T SU/
Patent Examiner, Art Unit 3662
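
To make the disputed claim logic concrete, the sketch below models the recited behavior in plain Python: sequential touches of adjacent in-line inputs within the 500 ms window become a directional gesture (claims 4 and 12), a first-second-third-second-first sequence within 1000 ms becomes a functional gesture (claim 6), and a recognition failure followed by operation of at least two input means feeds a training step (claim 1). All names, the adjacency test, and the left/right mapping are illustrative assumptions; nothing here comes from the application's actual disclosure.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Touch:
    input_id: int   # index of the physical input means (button), 0..n-1, arranged in a line
    time_ms: int    # timestamp of the touch; touches are assumed sorted by time

@dataclass
class Controller:
    DIRECTIONAL_WINDOW_MS = 500         # "first time duration" (claims 4, 12)
    FUNCTIONAL_WINDOW_MS = 1000         # "second time duration" (claim 6)
    FUNCTIONAL_ORDER = (0, 1, 2, 1, 0)  # first-second-third-second-first order (claim 6)
    training_samples: List[List[Touch]] = field(default_factory=list)

    def classify(self, touches: List[Touch]) -> Optional[str]:
        # Conditions mirroring claims 13-14: (b) successive input,
        # (c) different signals, (d) input within the time window.
        if len(touches) < 2:
            return None
        ids = [t.input_id for t in touches]
        span = touches[-1].time_ms - touches[0].time_ms
        if tuple(ids) == self.FUNCTIONAL_ORDER and span <= self.FUNCTIONAL_WINDOW_MS:
            return "functional"  # e.g., executing or cancelling a specific function (claim 10)
        successive = all(b - a in (1, -1) for a, b in zip(ids, ids[1:]))
        if successive and span <= self.DIRECTIONAL_WINDOW_MS:
            return "right" if ids[-1] > ids[0] else "left"  # assumed direction mapping
        return None

    def on_gesture_failure(self, touches: List[Touch]) -> None:
        # Claim 1's fallback: when gesture recognition fails and the driver
        # operates at least two input means, keep that manipulation data
        # as a training sample for the recognizer.
        if len({t.input_id for t in touches}) >= 2:
            self.training_samples.append(touches)

Under this sketch, touches on inputs 0, 1, 2 at 0, 150, and 300 ms classify as "right"; the same inputs spread over 700 ms classify as nothing, and a failed radar-gesture attempt followed by those touches would be queued for training.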

Prosecution Timeline

Dec 06, 2022
Application Filed
Oct 17, 2024
Non-Final Rejection — §103
Dec 20, 2024
Response Filed
Mar 11, 2025
Final Rejection — §103
May 05, 2025
Examiner Interview Summary
May 05, 2025
Applicant Interview (Telephonic)
May 28, 2025
Request for Continued Examination
Jun 02, 2025
Response after Non-Final Action
Jul 24, 2025
Non-Final Rejection — §103
Oct 23, 2025
Response Filed
Dec 02, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12542054
Managing Vehicle Behavior Based On Predicted Behavior Of Other Vehicles
2y 5m to grant • Granted Feb 03, 2026

Patent 12539916
Method for Maneuvering a Vehicle
2y 5m to grant • Granted Feb 03, 2026

Patent 12539859
CONTROL DEVICE FOR HYBRID VEHICLE
2y 5m to grant • Granted Feb 03, 2026

Patent 12534082
VEHICLE FOR CONTROLLING REGENERATIVE BRAKING AND A METHOD OF CONTROLLING THE SAME
2y 5m to grant • Granted Jan 27, 2026

Patent 12529575
SYSTEM AND METHOD FOR DETECTING ACTIVE ROAD WORK ZONES
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 69%
With Interview: 99% (+32.3%)
Median Time to Grant: 3y 5m
PTA Risk: High

Based on 139 resolved cases by this examiner. Grant probability derived from career allow rate.
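
One plausible reconstruction of that derivation is shown below. This is a hypothetical reading, not the provider's disclosed model; in particular, the 99% ceiling is an assumption chosen only to match the displayed figure:

# Hypothetical reconstruction of the projection arithmetic (not a disclosed model).
base = 96 / 139                          # career allow rate ~= 0.69 -> the "69%" shown
lift = 0.323                             # reported interview lift, in percentage points
with_interview = min(base + lift, 0.99)  # assumed cap, since 69% + 32.3% exceeds 100%
print(f"base {base:.0%}, with interview {with_interview:.0%}")  # base 69%, with interview 99%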
