DETAILED ACTION
This is a non-final Office action on the merits. Claims 1-13 are pending and addressed below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/8/2024 is being considered by the examiner.
Documents listed but not submitted were found in parent application 17/349,210.
Non-English documents have been considered insofar as the translated portions and drawings provided permit (see MPEP 609).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 8, 9, and 10 each recite “wherein when it is determined that the second target is present during a period in which sound is being emitted to or collected from the first target”. It is unclear what entity performs this determination. It is likewise unclear whether this limitation falls within the scope of the claimed invention, which includes the unmanned moving body, the directional microphone, and the processor.
All claims depending from these claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, by virtue of their dependency.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over SASAKI et al. (JP-2019036174-A, cited in the IDS filed 11/8/2024; the translation provided with the IDS is relied upon) in view of KOBAYASHI et al. (JP-2012076162-A, cited in the IDS filed 11/8/2024; the translation provided with the IDS is relied upon).
Regarding claims 1 and 13, SASAKI et al. teaches:
An unmanned moving body, comprising:
a directional microphone that collects sound from an orientation direction; and
a processor that obtains one or more instances of sensing data including data obtained from the directional microphone,
wherein the processor:
determines a first position of the unmanned moving body in accordance with the positional relationship and causes the unmanned moving body to move to the first position, the first position being a position that places the first target and the second target within a range over which the sound is collected by the directional microphone at at least a predetermined quality
(at least figs. 1-4, 6, [0010]-[0074], [0086]-[0103], discussing input/output device 100, control device 10, microphone 50/acceptance function, specifying unit 42, a plurality of users U, and moving input/output device 100 in the direction of a user; in particular [0015]-[0022], [0050]-[0055])
SASAKI et al. does not explicitly teach:
determines whether or not a second target is present in a vicinity of a first target in accordance with at least one of the one or more instances of sensing data,
calculates a positional relationship between the first target and the second target from at least one of the one or more instances of sensing data when it is determined that the second target is present,
However, KOBAYASHI et al. teaches:
determines whether or not a second target is present in a vicinity of a first target in accordance with at least one of the one or more instances of sensing data,
calculates a positional relationship between the first target and the second target from at least one of the one or more instances of sensing data when it is determined that the second target is present,
(at least figs. 1-5B, [0006]-[0056], discussing a conversation robot that gets into position to have a conversation with a group of users, such as user A and user B, or user A, user B, and user C; the conversation robot rotates head portion 3 to have its front face 3a directed in the main attention target direction, toward the speaker or main listener of the group of users; the conversation robot rotates upper body portion 8 so that body front face 8b is directed along the center-of-gravity direction line among the users; in particular at least [0024]-[0025], fig. 3, [0026]-[0040], and claim 6, discussing the conversation robot circuit configuration including various units, a microphone, and a camera, and identifying user A and user B as participants in the conversation; [0063]) for conversation ([0006]-[0056]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of SASAKI et al. to determine whether or not a second target is present in a vicinity of a first target in accordance with at least one of the one or more instances of sensing data, and to calculate a positional relationship between the first target and the second target from at least one of the one or more instances of sensing data when it is determined that the second target is present, as taught by KOBAYASHI et al., for conversation.
Regarding claim 2, SASAKI et al. does not explicitly teach:
wherein the processor adjusts the range in accordance with the positional relationship, and determines the first position in accordance with the range that has been adjusted,
However, KOBAYASHI et al. teaches:
wherein the processor adjusts the range in accordance with the positional relationship, and determines the first position in accordance with the range that has been adjusted,
(at least figs. 1-5B, [0006]-[0056], discussing a conversation robot that gets into position to have a conversation with a group of users, such as user A and user B, or user A, user B, and user C; the conversation robot rotates head portion 3 to have its front face 3a directed in the main attention target direction, toward the speaker or main listener of the group of users; the conversation robot rotates upper body portion 8 so that body front face 8b is directed along the center-of-gravity direction line among the users; in particular at least [0019], discussing the speaker in mouth portion 22 of the head portion 21; at least [0024]-[0025], fig. 3, [0026]-[0040], and claim 6, discussing the conversation robot circuit configuration including various units, a microphone, and a camera, and identifying user A and user B as participants in the conversation; [0063]; [0054]-[0055], discussing “It is recognized which of the user A and the user B is a speaker, and in accordance with the content of utterance of the user A or the user B, the content of utterance originating from the speaker 49 … can be changed to realize autonomous action corresponding to conversation with the user A and the user B”; [0039]-[0040], discussing rotating the robot’s head portion and front face to a particular direction; thus, as the robot’s head portion and front face are rotated to point in a particular direction, the mouth portion rotates along with them; this reads on adjusting the direction of the range) for conversation ([0006]-[0056]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of SASAKI et al. such that the processor adjusts the range in accordance with the positional relationship, and determines the first position in accordance with the range that has been adjusted, as taught by KOBAYASHI et al., for conversation.
Regarding claim 3, SASAKI et al. does not explicitly teach:
wherein the first position is a position on a front side of the first target and the second target,
However, KOBAYASHI et al. teaches:
wherein the first position is a position on a front side of the first target and the second target,
(at least figs. 1-5B, [0006]-[0056], discussing a conversation robot that gets into position to have a conversation with a group of users, such as user A and user B, or user A, user B, and user C; the conversation robot rotates head portion 3 to have its front face 3a directed in the main attention target direction, toward the speaker or main listener of the group of users; the conversation robot rotates upper body portion 8 so that body front face 8b is directed along the center-of-gravity direction line among the users; in particular at least [0024]-[0025], fig. 3, [0026]-[0040], and claim 6, discussing the conversation robot circuit configuration including various units, a microphone, and a camera, and identifying user A and user B as participants in the conversation; [0063]; [0054]-[0055], discussing “It is recognized which of the user A and the user B is a speaker, and in accordance with the content of utterance of the user A or the user B, the content of utterance originating from the speaker 49 … can be changed to realize autonomous action corresponding to conversation with the user A and the user B”; see figs. 2A-2B, 4A-5B) for conversation ([0006]-[0056]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of SASAKI et al. with the first position being a position on a front side of the first target and the second target, as taught by KOBAYASHI et al., for conversation.
Regarding claim 4, SASAKI et al. does not explicitly teach:
wherein the processor:
obtains body information of the first target and body information of the second target in accordance with at least one of the one or more instances of sensing data, and
determines the first position in accordance with the body information of the first target and the body information of the second target,
However, KOBAYASHI et al. teaches:
wherein the processor:
obtains body information of the first target and body information of the second target in accordance with at least one of the one or more instances of sensing data, and
determines the first position in accordance with the body information of the first target and the body information of the second target,
(at least figs. 1-5B, [0006]-[0056], discussing a conversation robot that gets into position to have a conversation with a group of users, such as user A and user B, or user A, user B, and user C; the conversation robot rotates head portion 3 to have its front face 3a directed in the main attention target direction, toward the speaker or main listener of the group of users; the conversation robot rotates upper body portion 8 so that body front face 8b is directed along the center-of-gravity direction line among the users; in particular at least [0024]-[0025], fig. 3, [0026]-[0040], and claim 6, discussing the conversation robot circuit configuration including various units, a microphone, and a camera, and identifying user A and user B as participants in the conversation; [0063]; [0054]-[0055], discussing “It is recognized which of the user A and the user B is a speaker, and in accordance with the content of utterance of the user A or the user B, the content of utterance originating from the speaker 49 … can be changed to realize autonomous action corresponding to conversation with the user A and the user B”; see figs. 4A-5B; [0028]-[0048], discussing using microphone/audio signals and moving images to determine the face direction and body orientation of user A and user B, then determining the main attention direction line and center-of-gravity line) for conversation ([0006]-[0056]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of SASAKI et al. such that the processor: obtains body information of the first target and body information of the second target in accordance with at least one of the one or more instances of sensing data, and determines the first position in accordance with the body information of the first target and the body information of the second target, as taught by KOBAYASHI et al., for conversation.
Regarding claim 8, SASAKI et al. teaches:
wherein when it is determined that the second target is present during a period in which sound is being emitted to or collected from the first target, the processor causes the unmanned moving body to move to the first position in a state in which the first target is within the range (at least figs. 1-4, 6, [0010]-[0074], [0086]-[0103], discussing input/output device 100, control device 10, microphone 50/acceptance function, specifying unit 42, a plurality of users U, and moving input/output device 100 in the direction of a user; in particular [0015]-[0022], [0050]-[0055]; at least [0004]-[0015], discussing the smart speaker/input-output device 100 receiving sound from users and being controlled to move to a position; [0022]-[0024], discussing the smart speaker/input-output device 100 receiving sound from many users, identifying the positional relationship between the smart speaker/input-output device 100 and each user, and moving the smart speaker/input-output device 100 to a specified position where each user easily hears the sound);
Regarding claim 11, SASAKI et al. does not explicitly teach:
wherein the second target is a target related to the first target, and
the processor:
obtains at least one of information indicating a relationship with the first target or information indicating a relationship with the unmanned moving body from at least one of the one or more instances of sensing data, and
determines whether or not the second target is present in the vicinity of the first target by determining whether or not a target present in the vicinity of the first target is related to the first target in accordance with the at least one of the information indicating a relationship with the first target or the information indicating a relationship with the unmanned moving body,
However, KOBAYASHI et al. teaches:
the processor:
obtains at least one of information indicating a relationship with the first target or information indicating a relationship with the unmanned moving body from at least one of the one or more instances of sensing data, and
determines whether or not the second target is present in the vicinity of the first target by determining whether or not a target present in the vicinity of the first target is related to the first target in accordance with the at least one of the information indicating a relationship with the first target or the information indicating a relationship with the unmanned moving body,
(at least figs. 1-5B, [0006]-[0056], discussing a conversation robot that gets into position to have a conversation with a group of users, such as user A and user B, or user A, user B, and user C; the conversation robot rotates head portion 3 to have its front face 3a directed in the main attention target direction, toward the speaker or main listener of the group of users; the conversation robot rotates upper body portion 8 so that body front face 8b is directed along the center-of-gravity direction line among the users; in particular at least [0024]-[0025], fig. 3, [0026]-[0040], and claim 6, discussing the conversation robot circuit configuration including various units, a microphone, and a camera, and identifying user A and user B as participants in the conversation; [0063]; [0054]-[0055], discussing “It is recognized which of the user A and the user B is a speaker, and in accordance with the content of utterance of the user A or the user B, the content of utterance originating from the speaker 49 … can be changed to realize autonomous action corresponding to conversation with the user A and the user B”; see figs. 4A-5B; [0028]-[0048], discussing using microphone/audio signals and moving images to determine the face direction and body orientation of user A and user B, then determining the main attention direction line and center-of-gravity line) for conversation ([0006]-[0056]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of SASAKI et al. such that the second target is a target related to the first target, and the processor: obtains at least one of information indicating a relationship with the first target or information indicating a relationship with the unmanned moving body from at least one of the one or more instances of sensing data, and determines whether or not the second target is present in the vicinity of the first target by determining whether or not a target present in the vicinity of the first target is related to the first target in accordance with the at least one of the information indicating a relationship with the first target or the information indicating a relationship with the unmanned moving body, as taught by KOBAYASHI et al., for conversation.
Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over SASAKI et al. (JP-2019036174-A, cited in the IDS filed 11/8/2024; the translation provided with the IDS is relied upon) in view of KOBAYASHI et al. (JP-2012076162-A, cited in the IDS filed 11/8/2024; the translation provided with the IDS is relied upon) as applied to claim 1 above, and further in view of Ishii et al. (US 20070183618, cited in the IDS filed 6/16/2021).
Regarding claim 5, SASAKI et al. teaches:
wherein the processor:
determines the first position in accordance with at least one of the age of the first target or the age of the second target (at least figs. 1-4, 6, [0010]-[0074], [0086]-[0103], discussing input/output device 100, control device 10, microphone 50/acceptance function, specifying unit 42, a plurality of users U, and moving input/output device 100 in the direction of a user; in particular [0015]-[0022], [0050]-[0055]; at least [0073], discussing that when the age of the user U exceeds a predetermined threshold value, the control device 10 may move the input/output device 100 to a position closer to the user U);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of SASAKI et al. in view of KOBAYASHI et al. to determine the first position in accordance with at least one of the age of the first target or the age of the second target, as taught by SASAKI et al., to address situations in which the sound is difficult to hear.
SASAKI et al. does not explicitly teach:
estimates at least one of an age of the first target or an age of the second target in accordance with at least one of the one or more instances of sensing data,
However, Ishii et al. teaches:
estimates at least one of an age of the first target or an age of the second target in accordance with at least one of the one or more instances of sensing data,
(at least [0061]-[0066], discussing an ultra-directional speaker and that “the moving object can identify each visitor's height by using a combination of existing sensors so as to discriminate between children and adults on the basis of height information”) to transmit information ([0061]-[0066]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of SASAKI et al. to estimate at least one of an age of the first target or an age of the second target in accordance with at least one of the one or more instances of sensing data, as taught by Ishii et al., to transmit information.
Regarding claim 6, SASAKI et al. teaches:
wherein the processor determines the first position (at least figs. 1-4, 6, [0010]-[0074], [0086]-[0103], discussing input/output device 100, control device 10, microphone 50/acceptance function, specifying unit 42, a plurality of users U, and moving input/output device 100 in the direction of a user; in particular [0015]-[0022], [0050]-[0055]; at least [0004]-[0015], discussing the smart speaker/input-output device 100 receiving sound from users and being controlled to move to a position; [0022]-[0024], discussing the smart speaker/input-output device 100 receiving sound from many users, identifying the positional relationship between the smart speaker/input-output device 100 and each user, and moving the smart speaker/input-output device 100 to a specified position where each user easily hears the sound);
SASAKI et al. does not explicitly teach:
the first position to be a position that does not place a third target unrelated to the first target and the second target within the range,
However, Ishii et al. teaches:
the first position to be a position that does not place a third target unrelated to the first target and the second target within the range,
(at least [0061]-[0066], discussing an ultra-directional speaker: “the moving object can identify each visitor's height by using a combination of existing sensors so as to discriminate between children and adults on the basis of height information, can transmit a voice only to the children from the emitter 44, and can use only the nondirectional speaker 31 for ordinary listeners. As shown in FIG. 14, when there are three adult visitors and two child visitors”, indicating that from the robot’s position, emitter 44 transmits only to the two children and excludes the adult visitors) to transmit information ([0061]-[0066]);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of SASAKI et al. with the first position being a position that does not place a third target unrelated to the first target and the second target within the range, as taught by Ishii et al., to transmit information.
Regarding claim 7, SASAKI et al. teaches:
wherein the processor detects a position of an obstruction in accordance with at least one of the one or more instances of sensing data, and determines the first position in accordance with the position of the obstruction (at least figs. 1-4, 6, [0010]-[0074], [0086]-[0103], discussing input/output device 100, control device 10, microphone 50/acceptance function, specifying unit 42, a plurality of users U, and moving input/output device 100 in the direction of a user; in particular [0015]-[0022], [0050]-[0055]; at least [0004]-[0015], discussing the smart speaker/input-output device 100 receiving sound from users and being controlled to move to a position; at least [0022]-[0023]).
Allowable Subject Matter
Claim 12 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BAO LONG T NGUYEN whose telephone number is (571)270-7768. The examiner can normally be reached M-F 8:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BAO LONG T. NGUYEN
Examiner
Art Unit 3664
/BAO LONG T NGUYEN/Primary Examiner, Art Unit 3656