DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority and ADS
Receipt is acknowledged of the certified copy of Chinese application number 202311215928.7 (with a decimal point before the “7”), filed on 20 September 2023, e.g., as obtained through the PDX program.
In this respect, the examiner notes that the ADS claims priority to application number “2023112159287”, without the decimal point. It is unclear to the examiner what legal effect(s), if any, this decimal-point discrepancy between the actual application number and the application number to which priority is claimed in the ADS may have. Applicant is encouraged (though not required) to prudently correct the ADS in this respect, to eliminate the discrepancy and any concomitant legal effects/ramifications/consequences that the examiner cannot foresee.
Drawings
The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the (displayed) “input control[s]” of claims 3, 5, 6, and 10 to 12 and the (prominently) labeled invalid instruction of claims 7 and 14 must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
INFORMATION ON HOW TO EFFECT DRAWING CHANGES
Replacement Drawing Sheets
Drawing changes must be made by presenting replacement sheets which incorporate the desired changes and which comply with 37 CFR 1.84. An explanation of the changes made must be presented either in the drawing amendments section or the remarks section of the amendment paper. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). A replacement sheet must include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of the amended drawing(s) must not be labeled as “amended.” If the changes to the drawing figure(s) are not accepted by the examiner, applicant will be notified of any required corrective action in the next Office action. No further drawing submission will be required, unless applicant is notified.
Identifying indicia, if provided, should include the title of the invention, inventor’s name, and application number, or docket number (if any) if an application number has not been assigned to the application. If this information is provided, it must be placed on the front of each sheet and within the top margin.
Annotated Drawing Sheets
A marked-up copy of any amended drawing figure, including annotations indicating the changes made, may be submitted or required by the examiner. The annotated drawing sheet(s) must be clearly labeled as “Annotated Sheet” and must be presented in the amendment or remarks section that explains the change(s) to the drawings.
Timing of Corrections
Applicant is required to submit acceptable corrected drawings within the time period set in the Office action. See 37 CFR 1.85(a). Failure to take corrective action within the set period will result in ABANDONMENT of the application.
If corrected drawings are required in a Notice of Allowability (PTOL-37), the new drawings MUST be filed within the THREE MONTH shortened statutory period set for reply in the “Notice of Allowability.” Extensions of time may NOT be obtained under the provisions of 37 CFR 1.136 for filing the corrected drawings after the mailing of a Notice of Allowability.
Claim (Specification) Objections
[The Claim Objections section has been divided into two parts, I. and II., below:]
I. Claims 1, 2, 4, 5, 8, 9, and 12 are objected to because of the following informalities:
in claim 1, line 6, it appears “displaying” should be indented (37 CFR 1.75(i));
in claim 1, line 6, and in claim 8, line 9, “to prompt user” should apparently read, “to prompt a user”, for grammatical correctness;
in claim 2, line 3, and in claim 9, line 2, “based on current environment” should read, “based on a current environment” for grammatical correctness;
in claim 4, line 1, “e further” should apparently read, “further”;
in claim 5, line 6, and in claim 12, line 5, “Invalidating” (with an upper-case “I”) should apparently read, “invalidating” (with a lower-case “i”);
in claim 8, line 1, “comprises” should read, “comprising” [i.e., to be a proper object of a sentence starting with "I (or we) claim", "The invention claimed is", or the equivalent, see MPEP 608.01(m)];
in claim 8, line 2, “program instruction” should apparently read, “program instructions”; and
in claim 8, line 3, “the program instruction” should apparently read, “the program instructions”.
Appropriate correction is required.
II. Claim 13 is objected to under 37 CFR 1.75(c) as being in improper form because a multiple dependent claim should refer to other claims in the alternative only. See MPEP § 608.01(n) and 37 CFR 1.75(c). Accordingly, claim 13 has not been further formally treated on the merits; however, the examiner provisionally indicates below the manner in which claim 13 would be rejectable/rejected, as best understood, should the dependency issue not preclude treatment on the merits. To the extent that the dependency issue does not preclude treatment on the merits, applicant is hereby given notice that the currently indicated (provisional) rejections of claim 13 below are, in fact, made herein in a non-provisional manner, in order to promote compact prosecution.
Claim Interpretation
Regarding claims 1 to 12 and 14 (and provisionally, claim 13), the examiner understands the claims to reflect an improvement as described in the specification to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a). For example only, as indicated at published paragraph [0085], through the claimed prompt/confirmation instruction and by controlling the vehicle to execute the voice command, “This application does not rely on the feasibility of the user's voice itself, but rather on the user's ability to judge the feasibility of their input voice through the generated scene map after inputting the voice, achieving the feasibility and accuracy of the user controlling the vehicle to execute instruction through voice, and ensuring the user's driving safety.”
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 to 12 and 14 (and provisionally, claim 13) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
In claim 1, lines 8ff, and in claim 8, lines 11ff, “the first scene map is configured to present a scenario . . .” is indefinite and not reasonably certain in the claim context (e.g., present in what way particularly, using what instrumentality, to whom or what particularly, particularly how, etc.?).
In claim 2, lines 1ff, and in claim 9, lines 1ff, “a current scene map comprising:” is grammatically incorrect and unclear (e.g., does applicant intend, “a current scene map comprises:”?).
In claim 2, line 6, “placing the vehicle in the vehicle pose” is indefinite, since the claimed “vehicle” apparently refers to a physical vehicle in claim 1, line 1, and applicant is apparently referring to a representation/image of the vehicle in this portion of the claim, which is unclearly written.
In claim 4, line 2, “the voice commands” (plural) is unclear, with insufficient antecedent basis.
In claim 4, lines 6ff, and in claim 13, lines 6ff, “prominently labeling the corresponding images of the obstacles detecting within the driving trajectory” is unclear in its entirety (e.g., by what objective standard is “prominently” defined, “labeling” in what way or with/on what instrumentality particularly, which “[the] corresponding images” particularly, since these images apparently have insufficient antecedent basis, “of the obstacles detecting within the driving trajectory” meaning what particularly, etc.?).
In claim 5, line 6, and in claim 12, line 5, “invalidating the voice command” is indefinite, with “the voice command” having insufficient antecedent basis (e.g., is “the voice command” referring to only the first recited voice command in the claim, or is it also or instead referring to the “new voice command” recited later in the claim?).
In claim 7, line 4, “the driving trajectory” apparently has insufficient antecedent basis and is unclear.
In claim 7, lines 4ff, and in claim 14, lines 4ff, “prominently labeling the invalid instruction” is fully indefinite (e.g., by what objective standard is “prominently” defined, “labeling” the “invalid instruction” in what way or with/on what instrumentality particularly, etc.?).
In claim 8, line 10, “generate corresponding confirmation instruction” is fully indefinite and grammatically incorrect (e.g., “corresponding” in what way particularly, and “corresponding” modifying what in the claim?).
The dependency of claim 13 is confusing and unclear, in that the claim necessarily refers to both claims 8 and 12. See MPEP § 608.01(n) and 37 CFR 1.75(c).
Claim(s) depending from claims expressly noted above are also rejected under 35 U.S.C. 112 by reason of their dependency from a noted claim that is rejected under 35 U.S.C. 112, for the reasons given.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over McNew (2018/0029610) in view of Jafari et al. (2025/0091603).
McNew (‘610) reveals:
per claim 1, a voice-based control method for a vehicle, comprising:
when receiving a voice command [e.g., the verbal input at paragraph [0057] for identifying the candidate driving maneuver; e.g., obtained from the microphones 50 and used at step 204 in FIG. 2 to identify (obviously recognize) requests from the user], recognizing the voice command to obtain a pending voice command [e.g., for the identified driving maneuver that will be used in FIG. 2, to be rejected or performed];
planning vehicle actions of the vehicle based on the pending voice command [e.g., the driving plan generated at step 206 in FIG. 2];
constructing a current scene map of the vehicle to obtain a first scene map [e.g., FIG. 5], and
displaying the first scene map to prompt user to confirm or modify the pending voice command to correspondingly generate a confirmation instruction or an invalid instruction [e.g., paragraphs [0091]ff, etc., “For example, the queries may include, before or contemporaneously with the visual simulations of the driving plans, introductory descriptions of the predicate determinations that the overall risk of performing their driving maneuvers renders performing the driving maneuvers feasible, but with some risk, as well as descriptions that, because of the risk of their performance, the driving maneuvers are, although acceptable, only provisionally accepted. In cases of identification of the driving maneuvers by the vehicle 10, the queries may further include descriptions that offers are being made by the vehicle 10, to the user, to perform the driving maneuvers. In cases of identification of the driving maneuvers from user requests, the queries may further include descriptions that the user requests for the vehicle 10 to perform the driving maneuvers are only provisionally granted. The queries may then follow up on these descriptions with the questions of whether the user confirms the driving maneuvers notwithstanding the risk of their performance. . . . The queries are output to the user at the various interfaces implemented by the components of the audio/video system 46. . . . Similarly, the planning/decision making module 94 may, for instance, identify the user responses to queries for confirmation of the driving maneuvers from input signals transformed from corresponding mechanical inputs detected by touch screens in the displays 54. . . . If the user is not comfortable with a driving maneuver because of the risk of its performance, or otherwise, the user response to a query for confirmation of the driving maneuver could be that the user does not confirm the driving maneuver. . . . 
However, if a user is comfortable with a driving maneuver notwithstanding the risk of its performance, their user response to a query for confirmation of the driving maneuver could be that the user confirms the driving maneuver.”], wherein the first scene map is configured to present a scenario in which the vehicle would execute the vehicle actions [e.g., paragraph [0105], “the driving plan for performing the takeover may be visually simulated. An example visual simulation of a driving plan for performing a takeover is represented in FIG. 5 as a conceptual rendering of visual outputs at the surfaces of the displays 54 of the audio/video system 46 at a sequence of times (a)-(g). Since performing the takeover includes the performance of other driving maneuvers, such as slowing down, speeding up and lane changes, and combinations of these, it will be understood that the represented example visual simulation for performing the takeover is applicable in principle to visual simulations of driving plans for performing the included or any other driving maneuvers.”];
when the confirmation instruction is received, controlling the vehicle to execute the voice command [e.g., 222, Yes and 224 in FIG. 2]; and
when the invalid instruction is received, controlling the vehicle to ignore the voice command [e.g., 222, No and 212 in FIG. 2];
It may be alleged that McNew (‘610) does not expressly indicate that the voice command is recognized. However, he teaches at paragraph [0057] that user requests for the vehicle 10 to perform driving maneuvers are identified from input signals transformed from corresponding verbal inputs detected by the microphones 50, and the examiner believes this teaching renders the recognition of the voice command obvious to one having ordinary skill in the art, even without further teaching.
However, in the context/field of an improved voice controlled autonomous driving system, Jafari et al. (‘603) teaches at paragraphs [0018], [0042], [0048], [0073], etc. that a voice command generated by an occupant is converted to a text-based command (in a voice recognition module 50) based on one or more speech recognition algorithms, classified into one of a plurality of pre-compiled driving maneuvers. When an alternative driving maneuver is identified, the feedback message generation module 60 instructs the HMI 26 (a touchscreen; paragraph [0030]) to generate a message asking the occupant 40 to confirm the alternative driving maneuver. The occupant is thus afforded an opportunity to confirm or reject driving maneuvers first before the driving maneuvers are executed by the vehicle.
It would have been obvious before the effective filing date of the claimed invention to implement or modify the McNew (‘610) method and vehicle for visually simulating driving plans so that the user requests for the vehicle 10 to perform driving maneuvers would have been identified from input signals transformed from corresponding verbal inputs detected by the microphones 50 by use of a voice recognition module (50) using one or more speech recognition algorithms, as taught by Jafari et al. (‘603), and so that the opportunity to confirm or reject the driving maneuver would have been provided via a generated message on the touch screen HMI of the display 54 in McNew (‘610), e.g., which displayed the simulation, as taught by Jafari et al. (‘603), in order to identify the verbal input using speech recognition algorithms and provide feedback to the driver for confirmation of the maneuver, as taught by Jafari et al. (‘603), with a reasonable expectation of success, and e.g., as a use of a known technique to improve similar devices (methods, or products) in the same way.
As such, the implemented or modified McNew (‘610) method and vehicle for visually simulating driving plans would have rendered obvious:
per claim 1, . . . when receiving a voice command [e.g., in McNew (‘610), the verbal input at paragraph [0057] for identifying the candidate driving maneuver; e.g., obtained from the microphones 50 and used at step 204 in FIG. 2 to identify (obviously recognize) requests from the user], recognizing the voice command to obtain a pending voice command [e.g., by using the voice recognition module (50) using one or more speech recognition algorithms as taught by Jafari et al. (‘603); and in McNew (‘610), for the identified driving maneuver that will be used in FIG. 2, to be rejected or performed];
planning vehicle actions of the vehicle based on the pending voice command [e.g., in McNew (‘610), the driving plan generated at step 206 in FIG. 2];
constructing a current scene map of the vehicle to obtain a first scene map [e.g., in McNew (‘610), FIG. 5], and
displaying the first scene map to prompt user to confirm or modify the pending voice command to correspondingly generate a confirmation instruction or an invalid instruction [e.g., the opportunity to confirm or reject the driving maneuver would have obviously been provided via a generated message on the touch screen HMI of the display 54 in McNew (‘610), e.g., which displayed the simulation, as taught by Jafari et al. (‘603); and in McNew (‘610), paragraphs [0091]ff, etc., “For example, the queries may include, before or contemporaneously with the visual simulations of the driving plans, introductory descriptions of the predicate determinations that the overall risk of performing their driving maneuvers renders performing the driving maneuvers feasible, but with some risk, as well as descriptions that, because of the risk of their performance, the driving maneuvers are, although acceptable, only provisionally accepted. In cases of identification of the driving maneuvers by the vehicle 10, the queries may further include descriptions that offers are being made by the vehicle 10, to the user, to perform the driving maneuvers. In cases of identification of the driving maneuvers from user requests, the queries may further include descriptions that the user requests for the vehicle 10 to perform the driving maneuvers are only provisionally granted. The queries may then follow up on these descriptions with the questions of whether the user confirms the driving maneuvers notwithstanding the risk of their performance. . . . The queries are output to the user at the various interfaces implemented by the components of the audio/video system 46. . . . Similarly, the planning/decision making module 94 may, for instance, identify the user responses to queries for confirmation of the driving maneuvers from input signals transformed from corresponding mechanical inputs detected by touch screens in the displays 54. . . . 
If the user is not comfortable with a driving maneuver because of the risk of its performance, or otherwise, the user response to a query for confirmation of the driving maneuver could be that the user does not confirm the driving maneuver. . . . However, if a user is comfortable with a driving maneuver notwithstanding the risk of its performance, their user response to a query for confirmation of the driving maneuver could be that the user confirms the driving maneuver.”], wherein the first scene map is configured to present a scenario in which the vehicle would execute the vehicle actions [e.g., in McNew (‘610), paragraph [0105], “the driving plan for performing the takeover may be visually simulated. An example visual simulation of a driving plan for performing a takeover is represented in FIG. 5 as a conceptual rendering of visual outputs at the surfaces of the displays 54 of the audio/video system 46 at a sequence of times (a)-(g). Since performing the takeover includes the performance of other driving maneuvers, such as slowing down, speeding up and lane changes, and combinations of these, it will be understood that the represented example visual simulation for performing the takeover is applicable in principle to visual simulations of driving plans for performing the included or any other driving maneuvers.”];
. . .
per claim 2, depending from claim 1, wherein constructing a current scene map comprising:
obtaining a current background image based on current environment surrounding the vehicle [e.g., the information about the environment surrounding the vehicle as gathered in step 102 of FIG. 2 in McNew (‘610)];
planning a vehicle pose based on vehicle movements [e.g., respective poses as shown in the simulation of FIG. 5 in McNew (‘610)]; and
placing the vehicle in the vehicle pose in the current background image to obtain the first scene map [e.g., as shown in FIG. 5 of McNew (‘610)];
per claim 8, a vehicle, comprises:
a memory [e.g., 82 in McNew (‘610)], storing program instruction; and
a processor [e.g., 80 in McNew (‘610)], executing the program instruction to enable the vehicle to perform a voice-based control method for the vehicle, the voice-based control method for the vehicle comprising:
when receiving a voice command [e.g., in McNew (‘610), the verbal input at paragraph [0057] for identifying the candidate driving maneuver; e.g., obtained from the microphones 50 and used at step 204 in FIG. 2 to identify (obviously recognize) requests from the user], recognizing the voice command to obtain a pending voice command [e.g., by using the voice recognition module (50) using one or more speech recognition algorithms as taught by Jafari et al. (‘603); and in McNew (‘610), for the identified driving maneuver that will be used in FIG. 2, to be rejected or performed];
planning vehicle actions of the vehicle based on the pending voice command [e.g., in McNew (‘610), the driving plan generated at step 206 in FIG. 2];
constructing a current scene map of the vehicle to obtain a first scene map [e.g., in McNew (‘610), FIG. 5], and
displaying the first scene map to prompt user to confirm or modify the pending voice command to correspondingly generate a confirmation instruction or an invalid instruction [e.g., the opportunity to confirm or reject the driving maneuver would have obviously been provided via a generated message on the touch screen HMI of the display 54 in McNew (‘610), e.g., which displayed the simulation, as taught by Jafari et al. (‘603); and in McNew (‘610), paragraphs [0091]ff, etc., “For example, the queries may include, before or contemporaneously with the visual simulations of the driving plans, introductory descriptions of the predicate determinations that the overall risk of performing their driving maneuvers renders performing the driving maneuvers feasible, but with some risk, as well as descriptions that, because of the risk of their performance, the driving maneuvers are, although acceptable, only provisionally accepted. In cases of identification of the driving maneuvers by the vehicle 10, the queries may further include descriptions that offers are being made by the vehicle 10, to the user, to perform the driving maneuvers. In cases of identification of the driving maneuvers from user requests, the queries may further include descriptions that the user requests for the vehicle 10 to perform the driving maneuvers are only provisionally granted. The queries may then follow up on these descriptions with the questions of whether the user confirms the driving maneuvers notwithstanding the risk of their performance. . . . The queries are output to the user at the various interfaces implemented by the components of the audio/video system 46. . . . Similarly, the planning/decision making module 94 may, for instance, identify the user responses to queries for confirmation of the driving maneuvers from input signals transformed from corresponding mechanical inputs detected by touch screens in the displays 54. . . . 
If the user is not comfortable with a driving maneuver because of the risk of its performance, or otherwise, the user response to a query for confirmation of the driving maneuver could be that the user does not confirm the driving maneuver. . . . However, if a user is comfortable with a driving maneuver notwithstanding the risk of its performance, their user response to a query for confirmation of the driving maneuver could be that the user confirms the driving maneuver.”], wherein the first scene map is configured to present a scenario in which the vehicle would execute the vehicle actions [e.g., in McNew (‘610), paragraph [0105], “the driving plan for performing the takeover may be visually simulated. An example visual simulation of a driving plan for performing a takeover is represented in FIG. 5 as a conceptual rendering of visual outputs at the surfaces of the displays 54 of the audio/video system 46 at a sequence of times (a)-(g). Since performing the takeover includes the performance of other driving maneuvers, such as slowing down, speeding up and lane changes, and combinations of these, it will be understood that the represented example visual simulation for performing the takeover is applicable in principle to visual simulations of driving plans for performing the included or any other driving maneuvers.”];
when the confirmation instruction is received, controlling the vehicle to execute the voice command [e.g., 222, Yes and 224 in FIG. 2 of McNew (‘610)]; and
when the invalid is generated, controlling the vehicle to ignore the voice command [e.g., 222, No and 212 in FIG. 2 of McNew (‘610)];
per claim 9, depending from claim 8, wherein constructing a current scene map comprising:
obtaining a current background image based on current environment surrounding the vehicle [e.g., the information about the environment surrounding the vehicle as gathered in step 102 of FIG. 2 in McNew (‘610)];
planning a vehicle pose based on vehicle movements [e.g., respective poses as shown in the simulation of FIG. 5 in McNew (‘610)]; and
placing the vehicle in the vehicle pose in the current background image to obtain the first scene map [e.g., as shown in FIG. 5 of McNew (‘610)];
Claims 3 to 7, 10 to 12, and 14 (and provisionally, claim 13) are rejected under 35 U.S.C. 103 as being unpatentable over McNew (2018/0029610) in view of Jafari et al. (2025/0091603) as applied to claims 1 and 8 above, and further in view of Hayakawa et al. (2021/0309242).
McNew (‘610) as implemented or modified in view of Jafari et al. (‘603) has been described above.
The implemented or modified McNew (‘610) method and vehicle for visually simulating driving plans may not expressly reveal that multiple scene maps are created/obtained, or that the confirmation/invalid instruction input controls are displayed in the scene map, although the examiner understands these limitations would have been obvious to one of ordinary skill in the art from the teachings of McNew (‘610) alone, e.g., when multiple candidate driving maneuvers were to be sequentially identified on the route using the flow chart of FIG. 2, to obviously make sequential lane changes, as was conventional, and to use the single (as obviously depicted in FIG. 1) display of the one or more displays 54 to obviously display both the simulation and the questions of whether the user confirms the driving maneuvers (e.g., paragraphs [0091], [0093], etc.), in order to obviously use the suggested “one” display (54) for both its indicated/described purposes (e.g., paragraph [0022], FIG. 1, etc.).
However, in the context/field of an improved vehicle travel control method and device, Hayakawa et al. (‘242) teaches using the travel control information presentation function (e.g., FIG. 3) in order to encourage the driver to confirm safety by himself/herself each time lane change is performed, such as when a lane is first changed (to the lane L3, in FIG. 3) and then subsequently changed back (to the lane L2, in FIG. 3), wherein it is detected whether the travel scene is suitable for changing lanes at predetermined time intervals (paragraph [0086], [0146], etc.) while the vehicle is being operated and the questions confirming each of the first (e.g., previous) and second (e.g., present) lane changes are presented, together with the input units for the driver’s acceptance (or non-acceptance), and the lane change destination using a visual pattern such as an arrow (FIGS. 4A to 4E, paragraphs [0079], etc.).
It would have been obvious before the effective filing date of the claimed invention to implement or modify the McNew (‘610) method and vehicle for visually simulating driving plans so that multiple driving scenes with obstacles/cars would have been simulated as in FIG. 2 (at 212) and in FIG. 5 of McNew (‘610) and presented on a presentation device (15) as taught by Hayakawa et al. (‘242), for multiple sequential lane changes; so that, with each lane change simulation presentation, questions and input units for acceptance (151, 152, voice recognition using a microphone, a steering button, a blinker lever, etc.) would have been presented, as taught by Hayakawa et al. (‘242); and so that, when a (previous or present) lane change was not accepted, the invalid instruction (e.g., NG in FIG. 4B of Hayakawa et al. (‘242)) as the input unit for not accepting the lane change would have been provided/utilized by the driver, as taught by Hayakawa et al. (‘242), e.g., at S12, No, in order not to accept the lane change, with a reasonable expectation of success, and e.g., as a use of a known technique to improve similar devices (methods, or products) in the same way.
As such, the implemented or modified McNew (‘610) method and vehicle for visually simulating driving plans would have rendered obvious:
per claim 3, depending from claim 1, wherein when the confirmation instruction is received, controlling the vehicle to execute the voice command, comprises:
when the confirmation instruction is received, creating a current driving scene map of the vehicle to obtain a second scene map [e.g., 222, Yes and FIG. 5 in McNew (‘610); and the multiple travel scenes (see e.g., the right hand sides of FIGS. 4A to 4E) determined and presented for lane change acceptance by the driver in FIGS. 4A to 4E of Hayakawa et al. (‘242), which would have obviously been simulated in McNew (‘610)]; and
displaying a voice command input control in the second scene map to prompt the user to input a new voice command [e.g., paragraph [0079] and FIG. 4E in Hayakawa et al. (‘242), to prompt the user for a new lane change acceptance, where the character data of the question, “Do you accept the lane change?”, is displayed on the display (as a prompt), and voice recognition from the microphone is used to receive the driver’s input/response to the prompt, e.g., as an effective new command];
per claim 4, depending from claim 3, further comprising:
planning a driving trajectory of the vehicle based on the voice commands executed by the vehicle [e.g., the verbal inputs in McNew (‘610) and the voice recognition in Jafari et al. (‘603) and Hayakawa et al. (‘242)];
detecting whether there are obstacles within the driving trajectory based on the background image and the driving trajectory [e.g., the other vehicles (as obstacles) in the simulated driving plans (FIG. 5) of McNew (‘610)]; and
when there are obstacles within the driving trajectory, prominently labeling the corresponding images of the obstacles detected within the driving trajectory [e.g., as shown in FIG. 5 of McNew (‘610) and in FIG. 3 of Hayakawa et al. (‘242)];
per claim 5, depending from claim 1, wherein when the invalid instruction is received [e.g., at 222, No in FIG. 2 of McNew (‘610); and/or at “NG”2 in FIG. 4B of Hayakawa et al. (‘242) for a previous lane change; and/or in FIG. 4E of Hayakawa et al. (‘242), by not making the input via the input device or voice recognition indicating acceptance of the lane change], controlling the vehicle to ignore the voice command, including:
when the invalid instruction is received, displaying a command input control in the first scene map to prompt the user to input a new voice command [e.g., paragraph [0079] and FIG. 4E in Hayakawa et al. (‘242), to prompt the user for a new lane change acceptance, where the character data of the question, “Do you accept the lane change?”, is displayed on the display (as a prompt), and voice recognition from the microphone is used to receive the driver’s input/response to the prompt]; and
invalidating the voice command [e.g., when the acceptance is not received, in Hayakawa et al. (‘242); and similarly, at 222, No in FIG. 2 of McNew (‘610)];
per claim 6, depending from claim 1, further comprising:
when constructing the first scene map, displaying confirmation instruction input controls and invalid instruction input controls in the first scene map to remind the user to input the confirmation instruction or invalid instruction [e.g., as shown in FIG. 4B of Hayakawa et al. (‘242)];
per claim 7, depending from claim 6,
wherein when the invalid instruction is received, the voice-based control method for the vehicle further comprises:
when there are obstacles within the driving trajectory, prominently labeling the invalid instruction [e.g., the “NG” labeled instruction in FIG. 4B of Hayakawa et al. (‘242), obviously labeled when the driving plan simulation or the travel scene includes other vehicles, as taught by both McNew (‘610) and Hayakawa et al. (‘242)];
per claim 10, depending from claim 8, wherein when the confirmation instruction is received, controlling the vehicle to execute the voice command, comprises:
when the confirmation instruction is received, creating a current driving scene map of the vehicle to obtain a second scene map [e.g., 222, Yes and FIG. 5 in McNew (‘610); and the multiple travel scenes (see e.g., the right hand sides of FIGS. 4A to 4E) determined and presented for lane change acceptance by the driver in FIGS. 4A to 4E of Hayakawa et al. (‘242), which would have obviously been simulated in McNew (‘610)]; and
displaying a voice command input control in the second scene map to prompt the user to input a new voice command [e.g., paragraph [0079] and FIG. 4E in Hayakawa et al. (‘242), to prompt the user for a new lane change acceptance, where the character data of the question, “Do you accept the lane change?”, is displayed on the display (as a prompt), and voice recognition from the microphone is used to receive the driver’s input/response to the prompt, e.g., as an effective new command];
per claim 11, depending from claim 10, wherein the voice-based control method for the vehicle further comprises:
when constructing the first scene map, displaying confirmation instruction input controls and invalid instruction input controls in the first scene map to remind the user to input the confirmation instruction or invalid instruction [e.g., as shown in FIG. 4B of Hayakawa et al. (‘242)];
per claim 12, depending from claim 8, wherein when the invalid instruction is received, controlling the vehicle to ignore the voice command, including:
when the invalid instruction is received, displaying a command input control in the first scene map to prompt the user to input a new voice command [e.g., paragraph [0079] and FIG. 4E in Hayakawa et al. (‘242), to prompt the user for a new lane change acceptance, where the character data of the question, “Do you accept the lane change?”, is displayed on the display (as a prompt), and voice recognition from the microphone is used to receive the driver’s input/response to the prompt]; and
invalidating the voice command [e.g., when the acceptance is not received, in Hayakawa et al. (‘242); and similarly, at 222, No in FIG. 2 of McNew (‘610)];
per claim 13, depending from claim 12, wherein the vehicle of claim 8 further comprises:
planning a driving trajectory of the vehicle based on the voice commands executed by the vehicle [e.g., the verbal inputs in McNew (‘610) and the voice recognition in Jafari et al. (‘603) and Hayakawa et al. (‘242)];
detecting whether there are obstacles within the driving trajectory based on the background image and the driving trajectory [e.g., the other vehicles (as obstacles) in the simulated driving plans (FIG. 5) of McNew (‘610)]; and
when there are obstacles within the driving trajectory, prominently labeling the corresponding images of the obstacles detected within the driving trajectory [e.g., as shown in FIG. 5 of McNew (‘610) and in FIG. 3 of Hayakawa et al. (‘242)];
per claim 14, depending from claim 13, wherein when the invalid instruction is received, the voice-based control method for the vehicle further comprises:
when there are obstacles within the driving trajectory, prominently labeling the invalid instruction [e.g., the “NG” labeled instruction in FIG. 4B of Hayakawa et al. (‘242), obviously labeled when the driving plan simulation or the travel scene includes other vehicles, as taught by both McNew (‘610) and Hayakawa et al. (‘242)];
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David A Testardi whose telephone number is (571)270-3528. The examiner can normally be reached Monday, Tuesday, Thursday, 8:30am - 5:30pm E.T., and Friday, 8:30 am - 12:30 pm E.T.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID A TESTARDI/Primary Examiner, Art Unit 3664
1 See MPEP 2173.05(b), IV.
2 Meaning, “no good” (as the opposite of “O.K.”) in standard Japanese patent parlance.