Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
This Office Action is in response to the applicant’s amendments and remarks filed on 12/19/2025. This action is made FINAL.
Claims 1-12 and 14-19 are pending for examination.
Regarding the objections to claims 11 and 19, the examiner finds applicant's amendments to the claims acceptable and withdraws the objections to the amended claims.
Regarding the rejection of claims 1-12 and 14-18 under 35 U.S.C. §112(b), applicant's arguments are persuasive in view of applicant's amendments to the claims; therefore, the rejections are withdrawn.
Regarding the rejection of claim 19 under 35 U.S.C. §112(b), applicant's arguments have been fully considered but are not persuasive. Specifically, claim 19 was not amended (in a manner similar to claim 11) to cure the indefiniteness of its scope.
Regarding the rejection of claims 1-12 and 14-19 under 35 U.S.C. §103, applicant’s arguments have been considered but are deemed moot in view of the new grounds of rejection necessitated by applicant’s amendment, outlined below.
Specifically, while Tetsuya teaches part of the amended limitations, such as recognizing the touch sensors of the joystick levers at different points in time (Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-20, 32-35; Page 4, lines 21-28; Page 5, lines 37-43; Page 6, lines 1-6), Tetsuya fails to disclose all the amended limitations. The previously cited prior art also fails to teach these limitations; therefore, applicant's amendments necessitated the new grounds of rejection in view of newly cited prior art, outlined below.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 19 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 19 recites the phrase "wherein the lever set to the function inactivation state among the first and second joystick levers through the final step is configured to follow or not follow operation of a lever set to the function activation state among the first and second joystick levers", rendering the claim indefinite because the scope of the claim is unclear. That is, the lever in the function inactivation state either follows or does not follow operation of the lever in the function activation state; these two alternatives are exhaustive, and the claim recites both, so it is unclear how the recitation further limits the scope of the invention.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-12, 14, and 17-19 are rejected under 35 U.S.C. 103 as being obvious over Tetsuya (English Translation of JP 2006273203 A) in view of Masu (US 20190106141 A1) and Papendieck et al. (English Translation of WO 2020193142 A1), henceforth known as Tetsuya, Masu, and Papendieck respectively.
Tetsuya and Masu were first cited in a previous office action.
Regarding claim 1, Tetsuya discloses:
A method of controlling operation of an integrated control apparatus for an […] vehicle, the method comprising:
(Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-20;
Where the moving body driving apparatus is implemented in a vehicle)
a gripping step of recognizing a user's gripping of a first joystick lever and a second joystick lever provided in the […] vehicle with both hands of the user, respectively, wherein the first joystick lever and second joystick lever are each recognized at time points different from each other; and
(Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-20, 32-35; Page 4, lines 21-28; Page 5, lines 37-43; Page 6, lines 1-6;
Where the moving body driving apparatus detects when the driver grips operation members 34L and 34R (a gripping step of recognizing a user's gripping of a first joystick lever and a second joystick lever) implemented in the vehicle (provided in the […] vehicle) with right and left hands (with both hands of the user, respectively), where the driver grips operation members 34L and 34R at different points in time and the members are recognized at different points in time (wherein the first joystick lever and second joystick lever are each recognized at time points different from each other))
a setting step of determining a touch sensor recognized [priorly in time] between a touch sensor provided in the first joystick lever and a touch sensor provided in the second joystick lever [based on recognition of both the touch sensors at time points different from each other], setting the lever including the touch sensor recognized [priorly in time] to a function activation state among the first and second joystick levers, and setting the other lever to a function inactivation state among the first and second joystick levers.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; Page 4, lines 21-28; Page 5, lines 36-43; Page 6, lines 1-6, 11-13, 16-17, 23-25, 28-29;
Where in step ST02 the contact signals of the left and right contact sensors on operation members 34L and 34R are read, and in step ST03 it is determined whether the driver touches the touch sensor in the left operation member 34L or the right operation member 34R (a setting step of determining a touch sensor recognized […] between a touch sensor provided in the first joystick lever and a touch sensor provided in the second joystick lever […]), where when the left operation member is touched the method proceeds to step ST04 and step ST05 wherein the left operation member 34L is used for control, and when the right operation member is touched the method proceeds to step ST09 and step ST10 wherein the right operation member 34R is used for control (setting the lever including the touch sensor recognized […] to a function activation state among the first and second joystick levers), and the member that is not touched is not used for control (and setting the other lever to a function inactivation state among the first and second joystick levers)).
Tetsuya is silent on the following limitations, bolded for emphasis. However, in the same field of endeavor, Masu teaches:
A method of controlling operation of an integrated control apparatus for an autonomous vehicle, the method comprising:
(Masu, FIG. 1; FIG. 6; ¶[0023]-¶[0024]; ¶[0031]; ¶[0040]; ¶[0005]; ¶[0069];
Where the seat 1 includes a pair of steering levers (A method of controlling operation of an integrated control apparatus) and is implemented in a vehicle that can perform automatic driving (for an autonomous vehicle))
a gripping step of recognizing a user's gripping of a first joystick lever and a second joystick lever provided in the autonomous vehicle with both hands of the user, respectively…; and
(Masu, FIG. 1; FIG. 6; ¶[0031];
Where the seat 1 includes a pair of steering levers that contains sensors which determine the presence or absence of the driver operation of the steering levers 40A and 40B (a gripping step of recognizing a user's gripping of a first joystick lever and a second joystick lever) provided in the automated vehicle with both hands (provided in the autonomous vehicle with both hands of the user, respectively)).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the invention of Tetsuya with the features taught by Masu because “The present invention was made in view of such conventional problems, and is directed to providing a steering device which enables a sitter to instantly respond to a situation in which a driving state of a vehicle is changed from automatic driving to manual driving and is out of the way when an occupant does not need to operate the steering device” (Masu, ¶[0006]). That is, the features of Masu integrate the dual lever steering control system into an autonomous vehicle.
Further, combining the invention of Tetsuya with the features taught by Masu is an example of combining prior art elements according to known methods to yield predictable results (combining the levers of Tetsuya with the vehicle of Masu), see MPEP §2143(I)(A), or of simple substitution of one known element for another to obtain predictable results (substituting the manual vehicle of Tetsuya with the automated vehicle of Masu), see MPEP §2143(I)(B), since automated vehicles are desirable for the automated driving functions that alleviate the burden on the driver, allowing the driver to relax (Masu, ¶[0061]).
Tetsuya and Masu are silent on the following limitations, bolded for emphasis. However, in the same field of endeavor, Papendieck teaches:
a setting step of determining a touch sensor recognized priorly in time between a touch sensor provided in the first… and a touch sensor provided in the second… based on recognition of both the touch sensors at time points different from each other, setting… the touch sensor recognized priorly in time to a function activation state…, and setting the other… to a function inactivation state…
(Papendieck,
pages 3-4: “…The device according to the invention for outputting a parameter value in a vehicle comprises a detection unit with a first and a second detection area, for each of which a detection state and a blocking state can be activated. The input of the user can be detected by a detection area in the detection state, whereas no input can be detected by a detection area in the blocked state. It further comprises a control unit which is set up to control the detection unit in such a way that after an input has been detected in the first detection area for a specific blocking time interval, the blocking state for the second detection area is activated… In one embodiment, the first and the second detection area comprise surface areas on a surface of a detection unit. In particular, they are designed as areas of a touch-sensitive surface of the detection unit. As a result, operating elements can advantageously be used which can be integrated particularly well in the vehicle”;
Where a first touch sensor in a first area is recognized before a second touch sensor in a second area (a setting step of determining a touch sensor recognized priorly in time between a touch sensor provided in the first… and a touch sensor provided in the second…) when the touch sensors are recognized at different points within a time interval (based on recognition of both the touch sensors at time points different from each other) and where the first touch sensor recognized as being touched first is set to an active state while the second touch sensor is blocked for the time interval (setting… the touch sensor recognized priorly in time to a function activation state…, and setting the other… to a function inactivation state)).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the invention of Tetsuya and Masu with the features taught by Papendieck because “…This advantageously prevents the entry in the first detection area from being accidentally actuated in the second detection area.” (Papendieck, page 4).
Regarding claim 2, Tetsuya, Masu, and Papendieck teach the method of claim 1. Tetsuya further discloses:
further including an information display step of transmitting information on the first joystick lever or the second joystick lever set to the function activation state to the user using at least one of a lever LED, a haptic motor, or a display.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; Page 4, lines 32-41; Page 3, lines 32-35;
Where the contacted operation member provides a reactive force for either the left operation member or the right operation member (further including an information display step of transmitting information on the first joystick lever or the second joystick lever) designated as the operation lever in use (set to the function activation state) to provide haptic feedback to the driver using a reaction force motor 43R or 43L, depending on the lever (to the user using at least one of a lever LED, a haptic motor, or a display)).
Regarding claim 3, Tetsuya, Masu, and Papendieck teach the method of claim 2. Tetsuya further discloses:
further including: a first determination step of determining whether one or more individual sensors in the touch sensor provided in the first joystick lever maintain touch after the first joystick lever is set to the function activation state and the second joystick lever is set to the function inactivation state through the setting step; and
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 6; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines whether the touch contact is maintained for the right or left operation member (further including: a first determination step of determining whether one or more individual sensors in the touch sensor provided in the first joystick lever maintain touch) in order to determine whether the driver changes from right operation member to left operation member or vice versa (after the first joystick lever is set to the function activation state and the second joystick lever is set to the function inactivation state through the setting step))
a first change step of changing the first joystick lever to the function inactivation state when the one or more individual sensors in the first joystick lever do not maintain the touch as a result of the determining in the first determination step.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 10; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines an operation member is not in the operation state, i.e. changes to inactive (a first change step of changing the first joystick lever to the function inactivation state) when the contact signals from the respective contact sensors are not present (when the one or more individual sensors in the first joystick lever do not maintain the touch as a result of the determining in the first determination step)).
Regarding claim 4, Tetsuya, Masu, and Papendieck teach the method of claim 3. Tetsuya further discloses:
wherein the information display step is maintained when the one or more individual sensors in the first joystick lever maintain the touch as a result of the determining in the first determination step.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; Page 4, lines 32-41; Page 3, lines 32-35;
Where the reactive force is provided (wherein the information display step is maintained) based on the driver’s operation of the operation member, requiring maintaining contact with the operation member’s contact sensors (when the one or more individual sensors in the first joystick lever maintain the touch as a result of the determining in the first determination step)).
Regarding claim 5, Tetsuya, Masu, and Papendieck teach the method of claim 3. Tetsuya further discloses:
further including a second determination step of determining whether one or more individual sensors in the touch sensor provided in the second joystick lever recognize a touch after the first change step,
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 6; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines whether the touch contact is maintained for the right or left operation member (further including a second determination step of determining whether one or more individual sensors in the touch sensor provided in the second joystick lever recognize a touch after the first change step))
wherein, when the one or more individual sensors in the second joystick lever recognize the touch as a result of the determining in the second determination step, the second joystick lever is changed to the function activation state and the first joystick lever is maintained in the function inactivation state.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 6; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines the contact signal is present in the other operation lever (wherein, when the one or more individual sensors in the second joystick lever recognize the touch as a result of the determining in the second determination step), the operation lever with the contact signal present is placed in operation and the operation lever without a contact signal is not in the operation state (the second joystick lever is changed to the function activation state and the first joystick lever is maintained in the function inactivation state)).
Regarding claim 6, Tetsuya, Masu, and Papendieck teach the method of claim 5. Tetsuya further discloses:
wherein the first change step is performed when the one or more individual sensors in the second joystick lever do not recognize the touch as a result of the determining in the second determination step.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 10; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines an operation member is not in the operation state, i.e. changes to inactive (wherein the first change step is performed) when the contact signals from the respective contact sensors are not present (when the one or more individual sensors in the second joystick lever do not recognize the touch as a result of the determining in the second determination step)).
Regarding claim 7, Tetsuya, Masu, and Papendieck teach the method of claim 5. Tetsuya further discloses:
further including: a third determination step of determining whether one or more individual sensors in the touch sensor provided in the second joystick lever maintain touch after the second joystick lever is set to the function activation state and the first joystick lever is set to the function inactivation state through the setting step; and
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 6; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines whether the touch contact is maintained for the right or left operation member (further including: a third determination step of determining whether one or more individual sensors in the touch sensor provided in the second joystick lever maintain touch) in order to determine whether the driver changes from right operation member to left operation member or vice versa (after the second joystick lever is set to the function activation state and the first joystick lever is set to the function inactivation state through the setting step))
a second change step of changing the second joystick lever to the function inactivation state when the one or more individual sensors in the second joystick lever do not maintain the touch as a result of the determining in the third determination step.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 10; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines an operation member is not in the operation state, i.e. changes to inactive (a second change step of changing the second joystick lever to the function inactivation state) when the contact signals from the respective contact sensors are not present (when the one or more individual sensors in the second joystick lever do not maintain the touch as a result of the determining in the third determination step)).
Regarding claim 8, Tetsuya, Masu, and Papendieck teach the method of claim 7. Tetsuya further discloses:
wherein the information display step is maintained when the one or more individual sensors in the second joystick lever maintain the touch as a result of the determining in the third determination step.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; Page 4, lines 32-41; Page 3, lines 32-35;
Where the reactive force is provided (wherein the information display step is maintained) based on the driver’s operation of the operation member, requiring maintaining contact with the operation member’s contact sensors (when the one or more individual sensors in the second joystick lever maintain the touch as a result of the determining in the third determination step)).
Regarding claim 9, Tetsuya, Masu, and Papendieck teach the method of claim 7. Tetsuya further discloses:
further including a fourth determination step of determining whether one or more individual sensors in the touch sensor provided in the first joystick lever recognize a touch after the second change step,
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 6; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines whether the touch contact is maintained for the right or left operation member (further including a fourth determination step of determining whether one or more individual sensors in the touch sensor provided in the first joystick lever recognize a touch) in order to determine whether the driver changes from right operation member to left operation member or vice versa (after the second change step))
wherein, when the one or more individual sensors in the first joystick lever recognize the touch as a result of the determining in the fourth determination step, the first joystick lever is changed to the function activation state and the second joystick lever is maintained in the function inactivation state.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 10; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the moving body driving apparatus determines the contact signal is present in the other operation lever (wherein, when the one or more individual sensors in the first joystick lever recognize the touch as a result of the determining in the fourth determination step), the operation lever with the contact signal present is placed in operation and the operation lever without a contact signal is not in the operation state (the first joystick lever is changed to the function activation state and the second joystick lever is maintained in the function inactivation state)).
Regarding claim 10, Tetsuya, Masu, and Papendieck teach the method of claim 9. Tetsuya further discloses:
wherein, when the one or more individual sensors in the first joystick lever do not recognize the touch as a result of the determining in the fourth determination step, the second change step is performed.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 10; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where when the contact signals from the respective contact sensors in the operation levers are not present (wherein, when the one or more individual sensors in the first joystick lever do not recognize the touch as a result of the determining in the fourth determination step) the moving body driving apparatus determines an operation member is not in the operation state, i.e. changes to inactive and maintains the operation state of the operation member with the contact signal (the second change step is performed)).
Regarding claim 11, Tetsuya, Masu, and Papendieck teach the method of claim 1. Tetsuya discloses the lever set to the function inactivation state among the first and second joystick levers (Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 10; Page 6, lines 11-15). Masu further teaches:
wherein the lever… among the first and second joystick levers is configured to be changed to follow operation of a lever set to the function activation state among the first and second joystick levers.
(Masu, FIG. 1; ¶[0040];
Where the pair of steering levers 40A and 40B are used in a pushing/pulling motion to control vehicle steering, whereby one lever is linked to, and follows the operation of, the other lever).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the invention of Tetsuya and Papendieck with the features taught by Masu because “…The present invention was made in view of such conventional problems, and is directed to providing a steering device which enables a sitter to instantly respond to a situation in which a driving state of a vehicle is changed from automatic driving to manual driving” (Masu, ¶[0006]).
Regarding claim 12, Tetsuya discloses:
A method of controlling operation of an integrated control apparatus for […] vehicles, the method comprising:
(Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-20;
Where the moving body driving apparatus is implemented in a vehicle)
a gripping step of recognizing a user's gripping of a first joystick lever and a second joystick lever provided in an […] vehicle with both hands of the user, respectively, wherein the first joystick lever and second joystick lever are each recognized at time points different from each other;
(Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-20, 32-35; Page 4, lines 21-28; Page 5, lines 37-43; Page 6, lines 1-6;
Where the moving body driving apparatus detects when the driver grips operation members 34L and 34R (a gripping step of recognizing a user's gripping of a first joystick lever and a second joystick lever) implemented in the vehicle (provided in an […] vehicle) with right and left hands (with both hands of the user, respectively) where the driver grips operation members 34L and 34R at different points in time and the members are recognized at different points in time (wherein the first joystick lever and second joystick lever are each recognized at time points different from each other))
a setting step of determining a touch sensor recognized [priorly in time] between a touch sensor provided in the first joystick lever and a touch sensor provided in the second joystick lever [based on recognition of both the touch sensors at time points different from each other], setting the lever including the touch sensor recognized [priorly in time] to the function activation state among the first and second joystick levers, and setting the other lever to the function inactivation state among the first and second joystick levers;
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; Page 4, lines 21-28; Page 5, lines 36-43; Page 6, lines 1-6, 11-13, 16-17, 23-25, 28-29;
Where in step ST02 the contact signals of the left and right contact sensors on operation members 34L and 34R are read, and in step ST03 it is determined whether the driver touches the touch sensor in the left operation member 34L or the right operation member 34R (a setting step of determining a touch sensor recognized […] between a touch sensor provided in the first joystick lever and a touch sensor provided in the second joystick lever […]), where when the left operation member is touched the method proceeds to step ST04 and step ST05 wherein the left operation member 34L is used for control, and when the right operation member is touched the method proceeds to step ST09 and step ST10 wherein the right operation member 34R is used for control (setting the lever including the touch sensor recognized […] to the function activation state among the first and second joystick levers), and the member that is not touched is not used for control (and setting the other lever to the function inactivation state among the first and second joystick levers))
a comparison step of comparing a steering operation force of the first joystick lever with a steering operation force of the second joystick lever when the user performs steering operation using the first joystick lever and the second joystick lever together; and
(Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-14, 32-35; Page 4, lines 21-31; Page 5, lines 37-43; Page 6, lines 1-10, 36-43; Page 7, lines 1-4, 6-7;
Where in step ST16 the moving body apparatus determines whether the left operation amount Qle is larger than the right operation amount Qri (a comparison step of comparing a steering operation force of the first joystick lever with a steering operation force of the second joystick lever) when the driver operates both the left and right operation members 34L and 34R at the same time (when the user performs steering operation using the first joystick lever and the second joystick lever together))
a final step of setting the lever with the larger steering operation force among the first and second joystick levers to a function activation state and setting the other lever among the first and second joystick levers to a function inactivation state.
(Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-14, 32-35; Page 4, lines 21-31; Page 5, lines 37-43; Page 6, lines 1-10, 36-43; Page 7, lines 1-4, 6-7;
Where in steps ST17 or ST19, the moving body apparatus sets the larger operation amount of Qle or Qri to be the operating member in use (a final step of setting the lever with the larger steering operation force among the first and second joystick levers to a function activation state) and does not use the smaller operation amount operating member (and setting the other lever among the first and second joystick levers to a function inactivation state)).
Tetsuya is silent on the following limitations, bolded for emphasis. However, in the same field of endeavor, Masu teaches:
A method of controlling operation of an integrated control apparatus for autonomous vehicles, the method comprising:
(Masu, FIG. 1; FIG. 6; ¶[0023]-¶[0024]; ¶[0031]; ¶[0040]; ¶[0005]; ¶[0069];
Where the seat 1 includes a pair of steering levers (A method of controlling operation of an integrated control apparatus) and is implemented in a vehicle that can perform automatic driving (for autonomous vehicles))
a gripping step of recognizing a user's gripping of a first joystick lever and a second joystick lever provided in an autonomous vehicle with both hands of the user, respectively…;
(Masu, FIG. 1; FIG. 6; ¶[0031];
Where the seat 1 includes a pair of steering levers that contain sensors which determine the presence or absence of the driver operation of the steering levers 40A and 40B (a gripping step of recognizing a user's gripping of a first joystick lever and a second joystick lever) provided in the automated vehicle with both hands (provided in an autonomous vehicle with both hands of the user, respectively)).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the invention of Tetsuya with the features taught by Masu because “The present invention was made in view of such conventional problems, and is directed to providing a steering device which enables a sitter to instantly respond to a situation in which a driving state of a vehicle is changed from automatic driving to manual driving and is out of the way when an occupant does not need to operate the steering device” (Masu, ¶[0006]). That is, the features of Masu integrate the dual lever steering control system into an autonomous vehicle.
Further, combining the invention of Tetsuya with the features taught by Masu is an example of combining prior art elements according to known methods to yield predictable results (combining the levers of Tetsuya with the vehicle of Masu) - see MPEP §2143 A, or of simple substitution of one known element for another to obtain predictable results (substituting the manual vehicle of Tetsuya with the automatic vehicle of Masu) - see MPEP §2143 B, since automated vehicles are desirable for the automated driving functions that alleviate the burden on the driver, allowing the driver to relax (Masu, ¶[0061]).
Tetsuya and Masu are silent on the following limitations, bolded for emphasis. However, in the same field of endeavor, Papendieck teaches:
a setting step of determining a touch sensor recognized priorly in time between a touch sensor provided in the first… and a touch sensor provided in the second… based on recognition of both the touch sensors at time points different from each other, setting… the touch sensor recognized priorly in time to the function activation state…, and setting the other… to the function inactivation…
(Papendieck,
pages 3-4 : “…The device according to the invention for outputting a parameter value in a vehicle comprises a detection unit with a first and a second detection area, for each of which a detection state and a blocking state can be activated. The input of the user can be detected by a detection area in the detection state, whereas no input can be detected by a detection area in the blocked state. It further comprises a control unit which is set up to control the detection unit in such a way that after an input has been detected in the first detection area for a specific blocking time interval, the blocking state for the second detection area is activated… In one embodiment, the first and the second detection area comprise surface areas on a surface of a detection unit. In particular, they are designed as areas of a touch-sensitive surface of the detection unit. As a result, operating elements can advantageously be used which can be integrated particularly well in the vehicle”;
Where a first touch sensor in a first area is recognized before a second touch sensor in a second area (a setting step of determining a touch sensor recognized priorly in time between a touch sensor provided in the first… and a touch sensor provided in the second…) when the touch sensors are recognized at different points within a time interval (based on recognition of both the touch sensors at time points different from each other) and where the first touch sensor recognized as being touched first is set to an active state while the second touch sensor is blocked for the time interval (setting… the touch sensor recognized priorly in time to the function activation state…, and setting the other… to the function inactivation)).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the invention of Tetsuya and Masu with the features taught by Papendieck because “…This advantageously prevents the entry in the first detection area from being accidentally actuated in the second detection area.” (Papendieck, page 4).
Regarding claim 14, Tetsuya, Masu, and Papendieck teach the method of claim 12. Tetsuya further discloses:
further including an information display step of transmitting information on the first joystick lever or the second joystick lever set to the function activation state through the setting step to the user using at least one of a lever LED, a haptic motor, or a display.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; Page 4, lines 32-41; Page 3, lines 32-35;
Where the contacted operation member provides a reactive force for either the left operation member or the right operation member (further including an information display step of transmitting information on the first joystick lever or the second joystick lever) designated as the operation lever in use based on the contacted operation member (set to the function activation state through the setting step) to provide haptic feedback to the driver using a reaction force motor 43R or 43L, depending on the lever (to the user using at least one of a lever LED, a haptic motor, or a display)).
Regarding claim 17, Tetsuya, Masu, and Papendieck teach the method of claim 12. Tetsuya further discloses:
wherein, when the steering operation force of the first joystick lever is greater than the steering operation force of the second joystick lever as a result of the determining in the comparison step, the first joystick lever is set to the function activation state and the second joystick lever is set to the function inactivation state in the final step.
(Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-14, 32-35; Page 4, lines 21-31; Page 5, lines 37-43; Page 6, lines 1-10, 36-43; Page 7, lines 1-4, 6-7;
Where in steps ST17 or ST19, the moving body apparatus sets the larger operation amount of Qle or Qri to be the operating member in use (wherein, when the steering operation force of the first joystick lever is greater than the steering operation force of the second joystick lever as a result of the determining in the comparison step, the first joystick lever is set to the function activation state) and does not use the smaller operation amount operating member (and the second joystick lever is set to the function inactivation state in the final step)).
Regarding claim 18, Tetsuya, Masu, and Papendieck teach the method of claim 12. Tetsuya further discloses:
wherein, when the steering operation force of the second joystick lever is greater than the steering operation force of the first joystick lever as a result of the determining in the comparison step, the second joystick lever is set to the function activation state and the first joystick lever is set to the function inactivation state in the final step.
(Tetsuya, FIG. 1; FIG. 2; FIG. 3; FIG. 4; Page 3, lines 9-14, 32-35; Page 4, lines 21-31; Page 5, lines 37-43; Page 6, lines 1-10, 36-43; Page 7, lines 1-4, 6-7;
Where in steps ST17 or ST19, the moving body apparatus sets the larger operation amount of Qle or Qri to be the operating member in use (wherein, when the steering operation force of the second joystick lever is greater than the steering operation force of the first joystick lever as a result of the determining in the comparison step, the second joystick lever is set to the function activation state) and does not use the smaller operation amount operating member (and the first joystick lever is set to the function inactivation state in the final step)).
Regarding claim 19, Tetsuya, Masu, and Papendieck teach the method of claim 12. Tetsuya further discloses:
wherein the lever set to the function inactivation state among the first and second joystick levers through the final step is configured to follow or not follow operation of a lever set to the function activation state among the first and second joystick levers.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; FIG. 7; Page 4, lines 21-28; Page 5, line 37-Page 6, line 10; Page 6, lines 11-15, 23-27, 36-40; Page 9, lines 7-11, 23-27;
Where the inactive lever of the right operating member and the left operating member (wherein the lever set to the function inactivation state among the first and second joystick levers through the final step) does not follow the operated member (is configured to follow or not follow operation of a lever set to the function activation state among the first and second joystick levers)).
Claims 15 and 16 are rejected under 35 U.S.C. 103 as being obvious over Tetsuya, Masu, and Papendieck as applied to claim 12, above, and in further view of Sugitani et al. (US 20040003954 A1), henceforth known as Sugitani.
Regarding claim 15, Tetsuya, Masu, and Papendieck teach the method of claim 12. Tetsuya further discloses:
wherein the steering operation force of the first joystick lever and the steering operation force of the second joystick lever are measured by respective […] sensors, measured values of the steering operation forces are transmitted to a main printed circuit board (PCB), and the main PCB is configured to determine magnitudes of the steering operation forces.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; Page 3, lines 32-35; Page 4, lines 8-20; Page 6, lines 41-43; Page 7, lines 1-4, 6-7; Page 5, lines 32-33;
Where the operation amounts Qle and Qri are measured by operation amount sensors 42L and 42R (wherein the steering operation force of the first joystick lever and the steering operation force of the second joystick lever are measured by respective […] sensors), and where the operation amounts Qle and Qri are processed by the moving body apparatus, implemented by a microcomputer (measured values of the steering operation forces are transmitted to a main printed circuit board (PCB)), wherein the moving body apparatus compares Qle and Qri of the right and left operation members to determine the largest operation amount, regardless of direction (and the main PCB is configured to determine magnitudes of the steering operation forces)).
Tetsuya, Masu, and Papendieck are silent on the following limitations, bolded for emphasis. However, in the same field of endeavor, Sugitani teaches:
wherein the steering operation force of the… joystick lever… measured by… torque sensor…
(Sugitani, FIG. 1; FIG. 2; FIG. 3; ¶[0044]-¶[0047];
Where controller 1 measures the driver’s manipulation of steering lever 11 using torque sensor 15 (wherein the steering operation force of the… joystick lever… measured by… torque sensor…)).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the invention of Tetsuya, Masu, and Papendieck with the features taught by Sugitani because “…the reaction force motor 19 generates a reaction force of predetermined magnitude acting in the reverse direction relative to manipulation of the lever 11, according to an amount of manipulation and a manipulation torque of the lever 11, thereby improving the performance of manipulation associated with steering.” (Sugitani, ¶[0049]).
Further, combining the invention of Tetsuya, Masu, and Papendieck with the features taught by Sugitani is an example of combining prior art elements according to known methods to yield predictable results (combining the lever sensors of Sugitani with the levers of Tetsuya) - see MPEP §2143 A, or of simple substitution of one known element for another to obtain predictable results (substituting the operation sensors of Tetsuya with the torque sensors of Sugitani) - see MPEP §2143 B, since the torque sensors provide improved performance of the reaction motor, similar to the reaction motor of Tetsuya.
Regarding claim 16, Tetsuya, Masu, Papendieck, and Sugitani teach the method of claim 15. Sugitani teaches the torque sensor, as outlined above in claim 15 (Sugitani, FIG. 1; FIG. 2; FIG. 3; ¶[0044]-¶[0047]). Tetsuya further discloses:
wherein the main PCB compares absolute values of the values measured by the respective […] sensors to determine the magnitudes of the steering operation forces regardless of a steering operation direction of the first joystick lever and a steering operation direction of the second joystick lever.
(Tetsuya, FIG. 2; FIG. 3; FIG. 4; Page 3, lines 32-35; Page 4, lines 8-20; Page 6, lines 41-43; Page 7, lines 1-4, 6-7; Page 5, lines 32-33;
Where the moving body apparatus, implemented by a microcomputer, measures operation amounts Qle and Qri using operation amount sensors 42L and 42R (wherein the main PCB compares absolute values of the values measured by the respective […] sensors), wherein the moving body apparatus compares Qle and Qri of the right and left operation members to determine the largest operation amount, regardless of direction of the right and left operation members (to determine the magnitudes of the steering operation forces regardless of a steering operation direction of the first joystick lever and a steering operation direction of the second joystick lever)).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the invention of Tetsuya, Masu, and Papendieck with the features taught by Sugitani for the same reasons outlined above in claim 15.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Papendieck et al. (WO 2020193142 A1) discloses a device for detecting an input of a user in a vehicle (1), comprising a detection unit (2) having a first and a second detection region, for each of which a detection state and a blocked state can be activated. The input of the user by a detection region is detectable in the detection state, whereas no input by a detection region is detectable in the blocked state. The device also comprises a control unit (3), which is designed to control the detection unit (2) such that, once an input in the first detection region has been detected, the blocked state is activated for the second detection region for a specific blocking time interval. In the corresponding method, an input is detected in a first detection region and then a blocked state is activated for a second detection region for a specific blocking time interval. This is the WO publication corresponding to the reference applied in the new grounds of rejection above.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tawri M McAndrews whose telephone number is (571)272-3715. The examiner can normally be reached M-W (0800-1000).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Lee can be reached at (571)270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.M.M./ Examiner, Art Unit 3668
/JAMES J LEE/ Supervisory Patent Examiner, Art Unit 3668