DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 32. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to because reference character 510 in Figure 5A reads "YOUCOME", which appears to be a typographical error and should read "YOU COME" to improve clarity. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The disclosure is objected to because of the following informalities: ¶ 0098 line 7, ¶ 0126 line 2, and ¶ 0129 line 2 read "there are a plurality", which appears to be a subject-verb agreement error and should read "there is a plurality" to improve clarity.
Appropriate correction is required.
Claim Interpretation
An “ultra-compact mobility vehicle” is interpreted as a micromobility vehicle in light of ¶ 0002: “ultra-compact mobility vehicle (also referred to as a micro mobility vehicle)”. A micromobility vehicle is interpreted as known in the art as a vehicle under 500 kg. See “Micromobility” from Wikipedia for evidence of this definition as known in the art.
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
In claim 1, the "instruction acquisition unit" in the limitation "an instruction acquisition unit configured to acquire instruction information of the user" invokes 112(f) because "unit" is a term that does not have definite structure enabling the acquisition of instruction information.
In claim 1, the "image acquisition unit" in the limitation "an image acquisition unit configured to acquire a captured image captured in the moving object" invokes 112(f) because "unit" is a term that does not have definite structure enabling the acquisition of an image.
In claim 1, the "determination unit" in the limitation "a determination unit configured to determine a stop position of the moving object" invokes 112(f) because "unit" is a term that does not have definite structure enabling the determination of a stop position.
In claim 18, the "utterance acquisition unit" in the limitation "an utterance acquisition unit configured to acquire instruction information of the user" invokes 112(f) because "unit" is a term that does not have definite structure enabling the acquisition of utterance information.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
Regarding the “instruction acquisition unit”, a review of the specification does not appear to reveal any explicit structure for this unit. However, the examiner understands that the communication device recited in ¶ 0043 is likely meant to be the structure of this unit, as it fulfills the recited claim limitation:
"The control unit 30 acquires detection results of the detection units 15 to 17, input information of an operation panel 31, voice information input from a voice input device 33, the utterance information from the communication device 120, and the like to execute corresponding processing." (Emphasis added.)
While this details obtaining “utterance information”, utterance information is equivalent to instruction information as disclosed throughout the specification, which is why the communication device fulfills this limitation. At least Figure 1 shows the communication device 120 as a mobile phone.
[Image: media_image1.png, greyscale]
Regarding the “image acquisition unit”, a review of the specification does not appear to reveal any explicit structure for this unit. However, the examiner understands that at least one of the detection units recited in ¶ 0040 is likely meant to be the structure of this unit, as it fulfills the recited claim limitation:
“The vehicle 100 includes detection units 15 to 17 that detect targets around the vehicle 100. The detection units 15 to 17 are a group of external sensors that monitors the surroundings of the vehicle 100, and in the case of the present embodiment, each of the detection units 15 to 17 is an imaging device that captures an image of the surroundings of the vehicle 100 and includes, for example, an optical system such as a lens and an image sensor. In the vehicle 100, in addition to the imaging device, a radar or a light detection and ranging (LiDAR) can also be used. The vehicle 100 can acquire a position (hereinafter, referred to as relative position) of a specific person or a specific target viewed from a coordinate system of the vehicle 100 based on image information obtained by the detection unit. The relative position can be indicated as, for example, a position of 1 m on the left and a position of 10 m in front.” (Emphasis added.)
Regarding the “determination unit”, a review of the specification does not appear to reveal any explicit structure for this unit (see 112(b) below). However, a generic structure can at least be interpreted from the specification for the purpose of examination. Figure 4 shows that the determination unit 416 is part of the interaction unit 401.
[Image: media_image2.png, greyscale]
The interaction unit is given explicit structure in at least ¶ 0048:
“The software configuration according to the present embodiment includes an interaction unit 401, a vehicle control unit 402, and a database 403. The interaction unit 401 performs processing for the voice information (utterance information) transmitted and received to and from the communication device 120, processing for the image information acquired by the detection unit 15 or the like, processing for estimating the stop position, and other processing.” (Emphasis added.)
It can thus be assumed that the determination unit is software, since it is comprised within the interaction unit, which can itself be assumed to be software. However, this is not evident, since a control unit (also contained in the software configuration detailed above) is known physical structure rather than pure software; the disclosure is thus not satisfactory under 112(b) (see below). (See ¶ 0042 for evidence that the control unit is a conventional control unit known in the art and not pure software.)
Regarding the “utterance acquisition unit”, a review of the specification does not appear to reveal any explicit structure for this unit. However, the examiner understands that the voice input device recited in ¶ 0043 is likely meant to be the structure of this unit, as it fulfills the recited claim limitation:
"The control unit 30 acquires detection results of the detection units 15 to 17, input information of an operation panel 31, voice information input from a voice input device 33, the utterance information from the communication device 120, and the like to execute corresponding processing." (Emphasis added.)
While the communication device 120 does obtain utterance information, it is not comprised in the moving object as required in claim 18 and thus cannot be the structure for this unit. Instead, the voice input device must be assumed to be the structure for the unit. Figure 3 shows the voice input device as a microphone.
[Image: media_image3.png, greyscale]
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Examiner’s note: Examiner respectfully requests that the applicant amend the specification to make the structural aspects of each unit clearly explicit, or amend the claims to avoid invoking 112(f). While only the determination unit is rejected under 112(b), and the applicant is not required to use verbatim claim language within the specification (and thus cannot be objected to for this reason), the application suffers from a lack of clarity when claimed units are not explicitly provided structure, which may make it difficult for any future examiner, attorney, inventor, or other reader to fully understand the intended structure of the invention from the publication alone.
Examiner further notes that any response from applicant denying the interpretation above, without amendment and/or without evidence of proper structure recited within the specification, may result in a 112(b) rejection for indefiniteness.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “high” in claim 11 is a relative term which renders the claim indefinite. The term “high” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is unclear how high the probability has to be before the targets are determined as candidates. For the purpose of examination, “high” will be interpreted as above a predetermined threshold.
Regarding claims 1 and 17-20, the phrase “image captured in the moving object” is unclear in light of the specification. From the specification and drawings, the detection units (interpreted as the image acquisition unit as detailed in the 112(f) interpretation above) face outwards (Figure 2A) and capture an external environment (¶ 0040). This appears contradictory to the claim language that recites “acquiring a captured image captured in the moving object”. As written, the claim language seems to indicate that the captured image is of an interior of the object, but the specification and drawings seem to indicate that the captured image is of a periphery of the object. Due to this, the limitation is unclear and thus indefinite. For the purpose of examination, the captured image will be interpreted as an environment of the object, wherein the image acquisition unit is in the moving object.
Regarding claims 1, 16, and 18, the claim limitation “determination unit” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. As detailed in the 112(f) section above, the determination unit is detailed as being comprised in a software configuration; however, since a control unit, recited within the specification as being hardware and known hardware in the art, is also part of the software configuration, it is unclear whether the determination unit is additional software or is a hardware component utilizing software. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim(s) 2-16 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being dependent on rejected claim 1 and failing to cure the deficiencies listed above.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because it recites “a storage medium storing a program for causing a computer to perform a moving object control method”; however, the storage medium is not adequately limited within the specification or the claim to preclude transitory storage media, including, for example, signals per se. As recited in MPEP § 2106.03(I), signals per se are not within one of the four statutory categories.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 5, 10, 17, 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Uehara et al. US 20200262454 A1 (hereinafter Uehara) in view of Hicok et al. US 20190265703 A1 (hereinafter Hicok).
Regarding claim 1, Uehara teaches a moving object control apparatus that adjusts a stop position of a moving object based on an instruction of a user, the moving object control apparatus comprising:
an instruction acquisition unit comprising a communication device (Figure 4 shows a user's mobile phone 4) configured to acquire instruction information of the user (¶ 0092-0093 disclose that the user's mobile phone 4 acquires a taxiing request);
an image acquisition unit comprising an image sensor (Figure 5 shows a camera 33 on the vehicle) configured to acquire a captured image captured in the moving object (¶ 0063 discloses capturing an image);
a determination unit (at least ¶ 0051 discloses at least route setting is performed by a program) configured to determine a stop position of the moving object (Figure 9 S4-S10); and
a control unit (¶ 0066 discloses an electronic control unit in the travel control device 30) configured to control traveling of the moving object to cause the moving object to travel toward the determined stop position (¶ 0060 discloses the travel control device makes the vehicle travel towards the destination; see also ¶ 0096),
wherein the determination unit (i) determines a first stop position using position information of a communication device used by the user or position information corresponding to a destination included in first instruction information of the user (¶ 0093 discloses a dispatch request, used for navigation of the vehicle as shown in Figure 9, includes a destination in which a user would like to board), and (ii) determines a second stop position based on second instruction information of the user (¶ 0101-0102 discloses obtaining position identifying information from a user and navigating to the identified position) and a region of a predetermined target identified in the captured image (¶ 0107-0111 discloses navigating to a final stop position based on comparison between the user's image and image captured by the vehicle) in response to a position of the moving object falling within a predetermined distance from the first stop position by traveling of the moving object (¶ 0097-0099 discloses requesting position identifying information when a vehicle is within a predetermined range from the position saved in memory).
Uehara does not teach an image acquisition unit comprising a lens and an image sensor.
Hicok teaches an image acquisition unit comprising a lens and an image sensor (¶ 0300 discloses a plurality of cameras with a plurality of lenses to capture a periphery of a vehicle).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have modified Uehara to incorporate the teachings of Hicok. Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function, but in the very combination itself, that is in the substitution of the lenses and cameras of Hicok for the generic camera of Uehara. Thus, the simple substitution of one known element for another producing a predictable result of capturing a vehicle’s periphery renders the claim obvious.
Regarding claim 2, the modified Uehara reference teaches all of claim 1 as detailed above. Uehara further teaches that the determination unit determines the second stop position (¶ 0101-0102 discloses obtaining position identifying information from a user and navigating to the identified position) by identifying designation of the predetermined target from the second instruction information of the user (¶ 0108-0109 discloses matching features in the vehicle's environment with features in the user's image) and then identifying the region of the predetermined target from the captured image (¶ 0109 discloses detecting regions of features in the vehicle environment).
Regarding claim 5, the modified Uehara reference teaches all of claim 1 as detailed above. Uehara further teaches that the determination unit determines the first stop position using one or more pieces of instruction information including a pick-up request (Figure 9 discloses transmitting specified position information in response to receiving a dispatch request) and the destination (¶ 0093 discloses a dispatch request, used for navigation of the vehicle as shown in Figure 9, includes a destination in which a user would like to board).
Regarding claim 10, the modified Uehara reference teaches all of claim 1 as detailed above. Uehara further teaches that the determination unit calculates a probability distribution indicating a probability of being the stop position for regions of one or more targets identified in the captured image (¶ 0107-0109 discloses comparing features in a vehicle's image and a user's image to determine relevance level, i.e. probability of being a stop position), and determines the second stop position based on a region of a target having a highest probability (¶ 0110-0111 discloses when the relevance level is above a threshold, i.e. a "highest" probability, the vehicle is determined as having arrived at the pick-up position).
Regarding claim 17, claim 17 recites the same method of claim 1 and therefore the same grounds of rejection apply.
Regarding claim 19, claim 19 recites the same method of claim 1 and therefore the same grounds of rejection apply. Only new limitations not presented in claim 1 will be further discussed. Uehara further teaches that the information processing method is executed by an information processing apparatus (¶ 0049-0051 and 0082-0086 at least detail memories that store various programs to perform various functions using processors of a dispatch device; see Figure 2 for evidence of the display device being a computer since it has a memory and processor; see Figure 1 where a user terminal is a mobile phone 4; see also ¶ 0066-0069), the method further comprising:
transmitting, to the moving object, a control command for controlling traveling of the moving object to cause the moving object to travel toward the determined stop position (¶ 0095-0096 discloses sending a commanded location for the vehicle to travel to wherein the vehicle travels to said location based on the received data).
Regarding claim 20, claim 20 recites the same method of claim 1 and therefore the same grounds of rejection apply. Only new limitations not presented in claim 1 will be further discussed. Uehara further teaches a storage medium storing a program for causing a computer to perform the moving object control method (¶ 0049-0051 and 0082-0086 at least detail memories that store various programs to perform various functions using processors of a dispatch device; see Figure 2 for evidence of the display device being a computer since it has a memory and processor; see also ¶ 0066-0069).
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok as applied to claim 1 above, and further in view of Shimotani et al. US 20220295017 A1 (hereinafter Shimotani) and Chian Li US 11880800 B2 (hereinafter Li).
Regarding claim 3, the modified Uehara reference teaches all of claim 1 as detailed above. Uehara further teaches identifying the region of the predetermined target from the captured image based on the second instruction information of the user (¶ 0107-0111 discloses navigating to a final stop position based on comparison between the user's image and image captured by the vehicle).
Uehara does not teach determining the second stop position by identifying the user in the captured image based on the instruction information of the user and the captured image.
Shimotani teaches determining the second stop position by identifying the user in the captured image based on the instruction information of the user and the captured image (at least Figure 10 shows a user can indicate their position on an image captured by a vehicle using a mobile terminal; see also ¶ 0138-0142).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Shimotani such that a user of Uehara can further select where they are in a vehicle's image so the vehicle can stop at the user’s position as taught by Shimotani. This modification would be made with a reasonable expectation of success to reduce confusion, in crowded areas, as to which person is the user the vehicle should stop for and pick up.
Uehara does not teach identifying the region of the predetermined target from the captured image based on an action of the user identified in the captured image.
Li teaches identifying the region of the predetermined target from the captured image based on an action of the user identified in the captured image (col. 6 lines 31-60 discloses that after detection of a user, the vehicle will either remain stopped at the current location or drive to the user based on a detected user's action; here, the user is considered the target).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Li such that once a vehicle has been detected to be in a location relevant to the user's image as taught by Uehara, the vehicle may further choose to stop in the vicinity of the user or move closer to the user based on the user's action as taught by Li. This modification would be made with a reasonable expectation of success to allow for easy implementation of user pick-up by allowing waiting for a user to approach the vehicle, and to further improve user experience and prevent confusion and awkwardness by driving the vehicle to the user if the user does not approach the vehicle as disclosed by Li (col. 12 lines 40-44).
Claim(s) 4, 15, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok as applied to claim 1 above, and further in view of Shimotani.
Regarding claim 4, the modified Uehara reference teaches all of claim 1 as detailed above. Uehara further teaches that the determination unit determines the first stop position (¶ 0093 discloses a dispatch request, used for navigation of the vehicle as shown in Figure 9, includes a destination at which a user would like to board) in response to reception of instruction information including a pick-up request (Figure 9 discloses transmitting specified position information in response to receiving a dispatch request).
Uehara does not teach determining the first stop position using the position information of the communication device.
Shimotani teaches determining the first stop position using the position information of the communication device (¶ 0094 discloses obtaining a user's position based on location of a mobile terminal; ¶ 0066 discloses the mobile terminal may be a mobile phone).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Shimotani such that when the user submits a pick-up request according to Uehara, the location of the user's phone can be utilized for determination of the pick-up location. This modification would be made with a reasonable expectation of success to increase the speed of entering information such that a user is not required to manually enter a location, thus further improving user experience.
Regarding claim 15, the modified Uehara reference teaches all of claim 1 as detailed above.
Uehara does not teach that the instruction acquisition unit acquires the instruction information based on utterance information of the user.
Shimotani teaches that the instruction acquisition unit acquires the instruction information based on utterance information of the user (¶ 0133, for example, discloses receiving a voice indicating position identifying information of the user).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Shimotani such that the position identifying information of Uehara can further include the received voice information of Shimotani. This modification would be made with a reasonable expectation of success to improve ease of operation of the taxi system and improve accessibility for people with disabilities for whom touch screen operation alone is insufficient.
Regarding claim 18, claim 18 recites the same method as claim 1, and therefore the same grounds of rejection apply. Only new limitations not presented in claim 1 will be further discussed.
Uehara does not explicitly disclose an utterance acquisition unit comprising a microphone configured to acquire instruction information of the user.
Shimotani teaches an utterance acquisition unit comprising a microphone configured to acquire instruction information of the user (¶ 0133, for example, discloses receiving a voice indicating position identifying information of the user; the reception of voice information implies the existence of a microphone).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Shimotani such that the position identifying information of Uehara can further include the received voice information of Shimotani. This modification would be made with a reasonable expectation of success to improve ease of operation of the taxi system and improve accessibility for people with disabilities for whom touch screen operation alone is insufficient.
While the voice information of Shimotani is transmitted from a user’s mobile device to the mobile object, the mere rearrangement of parts such that the vehicle itself would contain the microphone for recording a user’s voice would have been prima facie obvious to one having ordinary skill in the art at the time of filing since it has been held that rearranging the location of elements without affecting operation of the elements involves only routine skill in the art. See MPEP 2144.04(VI)(C) and the court cases cited therein.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok as applied to claim 1 above, and further in view of Nakayamada JP 2019067012 A (hereinafter Nakayamada; the examiner relies upon the translated copy provided by the applicant).
Regarding claim 6, the modified Uehara reference teaches all of claim 1 as detailed above.
Uehara does not teach that the control unit causes the moving object to travel at a traveling speed reduced according to a predetermined standard in response to the position of the moving object falling within the predetermined distance from the first stop position.
Nakayamada teaches that the control unit causes the moving object to travel at a traveling speed reduced according to a predetermined standard in response to the position of the moving object falling within the predetermined distance from the first stop position (¶ 0022-0023 discloses decelerating the vehicle to a slower speed when entering an allocation area).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have modified Uehara to incorporate the teachings of Nakayamada such that when the vehicle enters the dispatch area as taught by Uehara, the vehicle's speed can be reduced according to Nakayamada. This modification would be made with a reasonable expectation of success to improve the probability of detecting a taxi-requesting user as taught by Nakayamada (¶ 0022).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok and Nakayamada as applied to claim 6 above, and further in view of Park et al. US 20220073099 A1 (hereinafter Park).
Regarding claim 7, the modified Uehara reference teaches all of claim 6 as detailed above.
Uehara does not teach that the control unit reduces the traveling speed according to a distance from the position of the moving object to a position of the predetermined target. While Nakayamada may also be interpreted as teaching this limitation in ¶ 0022-0023, wherein the target would be an allocation area, Park is relied upon for this limitation for completeness of the record.
Park teaches that the control unit reduces the traveling speed according to a distance from the position of the moving object to a position of the predetermined target (¶ 0123 discloses reducing a vehicle's speed to a target speed when the distance to a stop location is a target distance then reducing the speed to zero after traveling for a certain period; see also ¶ 0129 for a different example stopping profile).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Park such that when the vehicle detects relevancy above a threshold and thus detects the matched features as taught by Uehara, a distance to the detected features can be calculated, and when the distance meets a target distance, the vehicle's speed can be reduced to a preset speed and then further reduced to zero as taught by Park. This modification would be made with a reasonable expectation of success to optimize the stopping profile to provide a smooth and exact stop based on the target location.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok as applied to claim 1 above, and further in view of Xiao US 20210383699 A1 (hereinafter Xiao).
Regarding claim 11, the modified Uehara reference teaches all of claim 1 as detailed above. Uehara further teaches that the determination unit calculates a probability distribution indicating a probability of being the stop position for regions of a plurality of targets identified in the captured image (¶ 0107-0109 discloses comparing features in a vehicle's image and a user's image to determine relevance level, i.e. probability of being a stop position).
Uehara does not teach determining the second stop position according to a distance to each of targets as candidates using a predetermined number of target regions for which the probability is high as the candidates.
Xiao teaches determining the second stop position according to a distance to each of targets as candidates using a predetermined number of target regions for which the probability is high as the candidates (Figure 3 discloses determining a parking location as the location having the smallest distance (S309) from a collection of reference locations with the highest priority levels (S301); see also ¶ 0029).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Xiao such that once the image data is matched according to the teachings of Uehara, matching parking locations with the highest priority levels can be compared to determine the parking location with the shortest distance to the vehicle as the stopping location as taught by Xiao. This modification would be made with a reasonable expectation of success to reduce fuel consumption and time spent parking.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok as applied to claim 1 above, and further in view of Chian Li US 20180374002 A1 (hereinafter Chian).
Regarding claim 12, the modified Uehara reference teaches all of claim 1 as detailed above.
Uehara does not teach that when the instruction acquisition unit acquires instruction information related to another destination or another target while the moving object travels toward the determined stop position, the determination unit determines a new stop position related to the another destination or the another target while continuing traveling.
Chian teaches that when the instruction acquisition unit acquires instruction information related to another destination or another target while the moving object travels toward the determined stop position (¶ 0091 discloses a user can select a new destination anytime whether the vehicle is traveling or stopped), the determination unit determines a new stop position related to the another destination or the another target while continuing traveling (¶ 0091 discloses the newly entered destination replaces the old destination).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Chian such that when the vehicle operates according to Uehara, the user can, at any time, specify a new destination for the vehicle to travel to according to Chian. This modification would be made with a reasonable expectation of success to improve user experience and the flexibility of the system by allowing the destination to be changed.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok as applied to claim 1 above, and further in view of Lynch et al. US 20220092718 A1 (hereinafter Lynch).
Regarding claim 13, the modified Uehara reference teaches all of claim 1 as detailed above.
Uehara does not teach that, when the instruction acquisition unit acquires instruction information related to another destination or another target while the moving object travels toward the determined stop position, the determination unit transmits additional instruction information for narrowing down the another destination or the another target to the communication device.
Lynch teaches that, when the instruction acquisition unit acquires instruction information related to another destination or another target while the moving object travels toward the determined stop position (¶ 0031 discloses that, while traveling on a route to a pickup location, the user device may determine that the vehicle may need to reroute to a different destination due to road blocks), the determination unit transmits additional instruction information for narrowing down the another destination or the another target to the communication device (¶ 0031 discloses notifying a user via the user device of the new pickup location and requesting authorization for the new pickup location).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Lynch such that the method of Uehara can check for road closures and blockages that may require a different pickup destination to be suggested as taught by Lynch. This modification would be made with a reasonable expectation of success to improve user experience and speed of pickup by avoiding blocked or slowed roads.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok as applied to claim 1 above, and further in view of Zhang US 20240166243 A1 (hereinafter Zhang).
Regarding claim 14, the modified Uehara reference teaches all of claim 1 as detailed above. Uehara further teaches that the first stop position is determined using an absolute position that is a position of a target from a position based on a specific geographic coordinate (¶ 0097 discloses using GNSS to determine location relative to a specified location, indicating the specified location is in geographic coordinates as well).
Uehara does not explicitly teach that the second stop position is determined using a relative position of the target viewed from a coordinate system of the moving object.
Zhang teaches that the second stop position is determined using a relative position of the target viewed from a coordinate system of the moving object (¶ 0063 discloses determining a relative distance between a vehicle and a user and determining navigation instructions based on the relative distance).
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Zhang such that when a user cannot identify a vehicle according to the teachings of Uehara, communication between a user and vehicle will occur wherein relative distance is determined between the vehicle and user and the vehicle is controlled accordingly to bring the vehicle to the user as taught by Zhang. This modification would be made with a reasonable expectation of success to resolve problems of inaccurate pick-up points and positioning deviation and further improve riding convenience and flexibility as disclosed in Zhang (¶ 0064).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Uehara as modified by Hicok as applied to claim 1 above, and further in view of Sakurada et al. US 20220270490 A1 (hereinafter Sakurada).
Regarding claim 16, the modified Uehara reference teaches all of claim 1 as detailed above.
Uehara does not teach that the moving object is an ultra-compact mobility vehicle.
Sakurada teaches that the moving object is an ultra-compact mobility vehicle (¶ 0041 "micromobility vehicle").
It would have been prima facie obvious to one of ordinary skill in the art at the time of filing to have further modified Uehara to incorporate the teachings of Sakurada. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function, but in the combination itself, that is, in the substitution of the micromobility vehicle of Sakurada for the generic vehicle of Uehara. Thus, the simple substitution of one known element for another, producing the predictable result of autonomously transporting passengers, renders the claim obvious.
Allowable Subject Matter
Claims 8-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if rewritten to overcome the 112(b) rejection above.
The following is a statement of reasons for the indication of allowable subject matter:
Uehara teaches that the determination unit calculates a probability distribution indicating a probability of being the stop position for regions of one or more targets identified in the captured image (¶ 0107-0109 discloses comparing features in a vehicle's image and a user's image to determine relevance level, i.e. probability of being a stop position), and determines the second stop position based on a region of a target having a highest probability (¶ 0110-0111 discloses when the relevance level is above a threshold, i.e. a "highest" probability, the vehicle is determined as having arrived at the pick-up position).
However, Uehara fails to teach that the control unit lowers the traveling speed as the probability assigned to the target corresponding to the second stop position is lower or that the control unit controls the traveling speed to be higher than the traveling speed reduced according to the predetermined standard as the probability assigned to the target corresponding to the second stop position is higher.
Instead, these limitations appear counterintuitive in light of Uehara, which stops the vehicle upon determining that the relevance is above a threshold (¶ 0111). Therefore, in light of Uehara, one of ordinary skill in the art would have no motivation to adjust vehicle speed proportionally to a probability metric, as doing so would effectively destroy the functioning of Uehara such that, instead of stopping at a pickup location, the vehicle would speed away from the pickup location. Therefore, the claimed limitations appear to be novel and non-obvious in light of the prior art of record.
Documents Considered but not Relied Upon
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Grace et al. US 12062290 B1 discloses an autonomous vehicle dispatch system which tracks a user's phone and autonomously adjusts the pickup location as the user's location changes.
Marczuk et al. US 20200377128 A1 discloses an autonomous vehicle dispatch system that can suggest an alternative pickup location based on traffic congestion around the user specified pickup location.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ashley Tiffany Schoech whose telephone number is (571)272-2937. The examiner can normally be reached 5:00 am - 3:30 pm PT Monday - Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Piateski can be reached at 571-270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.T.S./Examiner, Art Unit 3669
/Erin M Piateski/Supervisory Patent Examiner, Art Unit 3669