DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. This Office action is in response to Application No. 18/901,786, filed on 09/30/2024, in which claims 1-20 are presented for examination.
Priority
3. Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in priority Application No. KR10-2023-0114914, filed on 08/30/2023.
Information Disclosure Statement
4. The information disclosure statements (IDS) submitted on 09/30/2024, 12/12/2024, and 10/27/2025 have been received and considered.
Specification
5. Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because the abstract is over 150 words. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Claim Objections
6. Claims 11-15 and 17-20 are objected to because of the following informalities:
Claim 11 reads “plurality of objects comprises…reflectivity information comprises” but should read “plurality of objects further comprises…reflectivity information further comprises”.
Claim 12 reads “plurality of objects comprises” but should read “plurality of objects further comprises”.
Claim 13 reads “candidate directions comprises:” but should read “candidate directions further comprises:”.
Claim 14 reads “priority order information comprises:” but should read “priority order information further comprises:”.
Claim 15 reads “priority order information comprises:” but should read “priority order information further comprises:”.
Claim 17 reads “plurality of objects comprises…reflectivity information comprises” but should read “plurality of objects further comprises…reflectivity information further comprises”.
Claim 18 reads “plurality of objects comprises” but should read “plurality of objects further comprises”.
Claim 19 reads “candidate directions comprises:” but should read “candidate directions further comprises:”.
Claim 20 reads “priority order information comprises:” but should read “priority order information further comprises:”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
7. Claims 1-2, 4-5, 9-11, 13-14, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Xue (CN 117806305 A) in view of Jeong et al. (US 11724397 B2) (hereinafter Jeong).
Regarding claim 1, Xue discloses A robot comprising: at least one sensor; a speaker; a microphone; a driver; at least one memory storing one or more instructions; and at least one processor configured to execute the one or more instructions, wherein the one or more instructions, when executed by the at least one processor, cause the robot to: (Xue Paragraph 0143: “The robot body may include: The mobile system, the navigation system, the microphone 100, the loudspeaker 101, the visual sensing unit, the calculating unit, and the storage medium.”) (Xue Paragraph 0145: “As shown in FIG. 3, the robot includes a processor 401, a drive system 402 including a driver (such as a mobile component in the embodiment of the present application), The sensor system 403, the wireless communication system 404, and the memory 405 may be connected via one or more communication buses 406.”) (Xue Paragraph 0336: “The method disclosed in the embodiments of the present application may be applied to the processor 903 or implemented by the processor 903”) generate a map comprising information regarding a plurality of objects based on sensing information obtained through the at least one sensor, (Xue Paragraph 0149: “Wherein, the navigation sensor 403b is used for calculating the position of the robot in the space, and is used for generating the operation map of the robot. 
For example, the navigation sensor 403b may specifically be a dead reckoning sensor, an obstacle detection and avoidance (ODOA) sensor, a positioning and mapping (SLAM) sensor, or the like.”) (Xue Paragraph 0155: “The family service robot is the representative of intelligent interaction and intelligent hardware, taking the sweeper as the representative, capable of moving independently in different rooms, flexibly avoiding obstacles, building an environment layout map and executing the navigation task based on position.”) […] based on receiving a user voice through the microphone, obtain information on an intensity of the user voice for each of a plurality of directions, obtain information on a plurality of candidate directions from which the user voice is received from among the plurality of directions based on the information on the intensity of the user voice for each of the plurality of directions, (Xue Paragraph 0014: “In one possible implementation, the first location comprises: determining a plurality of first candidate positions according to the first speech;”) (Xue Paragraph 0143: “The visual sensing unit can realize the detection of the specific target, calculate the distance from the specific target to the robot, and the microphone array device can specifically include the microphone array 105 and the loudspeaker 106. It should be understood that the 105 wheat array can also be the same wheat array as the 100 wheat array, which are all arranged on the robot.”) (Xue Paragraph 0213: “Due to the multi-path effect of the room sound transmission, the sound of the same sound source is directly transmitted by the wheat array, and some of the sound is reflected by the wall body to reach the wheat array, so the wheat array can detect the potential orientation of multiple sound sources. 
”) (Xue Paragraph 0215: “The sound propagation reflection path is complex in an indoor environment, and the sound source localization method can generally give several potential positions (and intensities) of the sound source.”) (Note: Position is based on direction) obtain priority order information for the plurality of candidate directions based on a position of the robot (Xue Paragraph 0085: “The mobile control module is specifically configured to:”) (Xue Paragraph 0086: “according to the target sequence, orderly moving to the multiple second candidate positions by controlling the moving component until moving to the correct candidate position in the multiple candidate positions.”) (Xue Paragraph 0087: “In one possible implementation, the target order is related to at least one of the following:”) (Xue Paragraph 0088: “a passing path length between the current position of the third device and each second candidate position of the plurality of second candidate positions;”) […] and obtain information on a direction in which the user voice is uttered from among the plurality of candidate directions based on the priority order information. 
(Xue Paragraph 0017: “In one possible implementation, the target order is related to at least one of the following: a passing path length between the current position of the third device and each second candidate position of the plurality of second candidate positions; The confidence level of each first candidate position in the plurality of first candidate positions is carried in the first information.”) (Xue Paragraph 0018: “In the embodiment of the application, it can calculate the sound source orientation heuristic search cost, the robot sorts the potential user area, preferentially navigates to the user potential area with high confidence, The area close to the distance can reduce the moving path cost and time cost on the premise of guaranteeing the correct moving to the area where the user is located.”) (Xue Paragraph 0333: “Referring to FIG. 9, FIG. 9 is a schematic diagram of an implementation device according to an embodiment of the present application, and the implementation device 900 may be embodied as the first device, the second device or the third device”) (“The chip may be a first device, a second device or a third device described in the above embodiments to perform the steps related to the device control method in the above embodiments.”)
Xue does not disclose […] generate ultrasonic waves toward each of the plurality of objects through the speaker, obtain reflectivity information regarding the plurality of objects based on reflected sounds reflected from each of the objects and received through the microphone, and store the reflectivity information, the reflected sounds reflected from each of the objects being at least a portion of the ultrasonic waves reflected from each of the objects, […] and the stored reflectivity information,
However, Jeong does teach […] generate ultrasonic waves toward each of the plurality of objects through the speaker, obtain reflectivity information regarding the plurality of objects based on reflected sounds reflected from each of the objects and received through the microphone, and store the reflectivity information, the reflected sounds reflected from each of the objects being at least a portion of the ultrasonic waves reflected from each of the objects, […] and the stored reflectivity information, (Jeong Column 6, line number 34-44: “However, when a sound signal is reflected off of an obstacle and the reflected sound waves are then received at the plurality of microphones 123 (123a to 123d), it may be difficult for the robot 100 to locate the direction from which the sound originates. For this reason, the accuracy of recognizing the sound is also lowered. As such, the robot 100 according to various embodiments of the present disclosure may detect distortions of sound signals resulting from reflections or deflections caused by obstacles, and may reduce the influence of the distorted sound signal, thereby more accurately locating the sound source and recognizing sounds.”) (Jeong Column 9, line number 49-53: “Meanwhile, when the sound information outputted from the sound source and the direction information on the sound signals are received at the robot 100, the controller 190 may store, in the memory 150, a label presenting the direction information on the sound source”) (Jeong Column 11, line number 45-47: “In an embodiment, the controller 190 may estimate the occupancy area of the obstacle by using the speaker 143 by outputting high-frequency sound information.”) (Jeong Column 13, line number 66- Column 14, line number 3: “The robot 100 may apply an echo cancellation algorithm to obtain a high-frequency sound signal reflected from the obstacle S1130, and may perform filtering (band-pass filtering) to isolate the high-frequency sound signal in the obtained sound signal S114”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Xue to include […] generate ultrasonic waves toward each of the plurality of objects through the speaker, obtain reflectivity information regarding the plurality of objects based on reflected sounds reflected from each of the objects and received through the microphone, and store the reflectivity information, the reflected sounds reflected from each of the objects being at least a portion of the ultrasonic waves reflected from each of the objects, […] and the stored reflectivity information, as taught by Jeong. This modification would have been beneficial because, first, when sound signals are received through a plurality of microphones, a sound signal distorted by an obstacle may be detected and properly processed, minimizing the influence of the distorted sound signal; and second, since the direction of a sound source that generates a sound may be estimated, the direction detection and beamforming performance of the robot may be improved. [Jeong Column 2, line number 36-43]
Regarding claim 2, Xue in view of Jeong teaches claim 1, accordingly, the rejection of claim 1 is incorporated above.
Xue does not disclose The robot of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the robot to: generate ultrasonic waves at preset distance intervals with respect to a wall object among the plurality of objects, and obtain reflectivity information regarding the wall object based on reflected sounds reflected from the wall object and received through the microphone, the reflected sounds reflected from the wall object being at least a portion of the ultrasonic waves generated at the preset intervals reflected from the wall object.
However, Jeong does teach The robot of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the robot to: (Jeong Column 15, line number 37-39: “The present disclosure described above may be implemented as a computer-readable code in a medium on which a program is recorded.”) (Jeong Column 15, line number 46-47: “In addition, the computer may include the processor 190 of the robot 100.”) generate ultrasonic waves at preset distance intervals with respect to a wall object among the plurality of objects, and obtain reflectivity information regarding the wall object based on reflected sounds reflected from the wall object and received through the microphone, the reflected sounds reflected from the wall object being at least a portion of the ultrasonic waves generated at the preset intervals reflected from the wall object. (Jeong Column 8, line number 1-2: “The robot 100 may include a transceiver 110, an input interface 120, a sensor 130, an output interface 140,”) (Jeong Column 8, line number 51-57: “The sensor 130 may include, for example, hardware based sensors such as a satellite-based location receiving sensor, a distance detection sensor, a connector connection detection sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an infrared (IR) sensor, a finger scan sensor, an ultrasonic sensor,”) (Jeong Column 9, line number 5-12: “ The output interface 140 may generate a visual, auditory, or haptic related output. In addition, the output interface 140 may include, for example, hardware based outputs such as an optical output interface, that is the display 141, for outputting visual information, and a speaker 143 for outputting auditory information. The speaker 143 may output audible frequency sound information and high-frequency sound information. 
”) (Jeong Column 10, line number 57-65: “Here, the predetermined range may include a distance range in which the sound signals received by the plurality of microphones 123 are distorted or a in which sound is absorbed by the specific obstacle by a predetermined level. However, when the obstacle is a wall and the robot 100 is within a predetermined distance, such as 30 cm, from the wall, the controller 190 may determine or it may be preset that the problem (sound distortion or sound absorption) is likely to occur.”) (Jeong Column 11, line number 45-47: “In an embodiment, the controller 190 may estimate the occupancy area of the obstacle by using the speaker 143 by outputting high-frequency sound information.”) (Jeong Column 11, line number 48-49: “Thus, when the high-frequency sound signals are received through the plurality of microphones 123”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Xue to include The robot of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the robot to: generate ultrasonic waves at preset distance intervals with respect to a wall object among the plurality of objects, and obtain reflectivity information regarding the wall object based on reflected sounds reflected from the wall object and received through the microphone, the reflected sounds reflected from the wall object being at least a portion of the ultrasonic waves generated at the preset intervals reflected from the wall object, as taught by Jeong. This modification would have been beneficial because, first, when sound signals are received through a plurality of microphones, a sound signal distorted by an obstacle may be detected and properly processed, minimizing the influence of the distorted sound signal; and second, since the direction of a sound source that generates a sound may be estimated, the direction detection and beamforming performance of the robot may be improved. [Jeong Column 2, line number 36-43]
Regarding claim 4, Xue discloses The robot of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the robot to: obtain the information on the intensity of the user voice for each of the plurality of directions based on the position of the robot, (Xue Paragraph 0215: “The sound propagation reflection path is complex in an indoor environment, and the sound source localization method can generally give several potential positions (and intensities) of the sound source.”) (Xue Paragraph 0317: “In one possible implementation, the target order is related to at least one of the following:”) (Xue Paragraph 0318: “a passing path length between the current position of the third device”) and identify as the plurality of candidate directions a preset number of directions among the plurality of directions in which the respective intensity of the user voice exceeds a predetermined threshold. (Xue Paragraph 0014: “In one possible implementation, the first location comprises: determining a plurality of first candidate positions according to the first speech;”) (Xue paragraph 0213: “Due to the multi-path effect of the room sound transmission, the sound of the same sound source is directly transmitted by the wheat array, and some of the sound is reflected by the wall body to reach the wheat array, so the wheat array can detect the potential orientation of multiple sound sources.”) (Xue Paragraph 0229: “In the embodiment, the sound source orientation greater than the intensity threshold is collected according to the robot main body microphone array as the potential orientation of the user, and the search cost is calculated according to the navigation distance and the confidence intensity. In the current technology, only the maximum intensity direction is taken, and the target navigation point is selected with a fixed threshold interval to explore;”)
Regarding claim 5, Xue discloses The robot of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the robot to: identify objects, among the plurality of objects, positioned in the plurality of candidate directions relative to the position of the robot, (Xue Paragraph 0085: “The mobile control module is specifically configured to:”) (Xue Paragraph 0086: “according to the target sequence, orderly moving to the multiple second candidate positions by controlling the moving component until moving to the correct candidate position in the multiple candidate positions.”) (Xue Paragraph 0149: “Wherein, the navigation sensor 403b is used for calculating the position of the robot in the space, and is used for generating the operation map of the robot. For example, the navigation sensor 403b may specifically be a dead reckoning sensor, an obstacle detection and avoidance (ODOA) sensor, a positioning and mapping (SLAM) sensor, or the like.”) and identify a priority order with respect to the plurality of candidate directions based on the respective information on the intensity of the user voice corresponding to each of the plurality of candidate directions (Xue Paragraph 0085: “The mobile control module is specifically configured to:”) (Xue Paragraph 0086: “according to the target sequence, orderly moving to the multiple second candidate positions by controlling the moving component until moving to the correct candidate position in the multiple candidate positions.”) (Xue Paragraph 0213: “Because the sound signal has different degrees of loss through different propagation paths, the signal intensity (Ii) of the potential orientation (Ai) of each sound source has a certain positive correlation with the confidence. 
combining the sound source azimuth A whose azimuth is less than the preset azimuth deviation threshold, and calculating the confidence interval R of the sound source azimuth according to the signal intensity threshold T”) (Xue Paragraph 0220: “In one possible implementation, the target order is related to at least one of the following: a passing path length between the current position of the third device and each second candidate position of the plurality of second candidate positions; The confidence level of each first candidate position in the plurality of first candidate positions is carried in the first information.”)
Xue does not disclose […] and reflectivity information corresponding to the identified objects.
However, Jeong does teach […] and reflectivity information corresponding to the identified objects. (Jeong Column 11, line number 10-12: “In response to the obstacle being found and identified, the controller 190 may estimate the occupancy area of the obstacle in the space.”) (Jeong Column 11, line number 45-47: “In an embodiment, the controller 190 may estimate the occupancy area of the obstacle by using the speaker 143 by outputting high-frequency sound information.”) (Jeong Column 11, line number 48-49: “Thus, when the high-frequency sound signals are received through the plurality of microphones 123”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Xue to include […] and reflectivity information corresponding to the identified objects, as taught by Jeong. This modification would have been beneficial because, first, when sound signals are received through a plurality of microphones, a sound signal distorted by an obstacle may be detected and properly processed, minimizing the influence of the distorted sound signal; and second, since the direction of a sound source that generates a sound may be estimated, the direction detection and beamforming performance of the robot may be improved. [Jeong Column 2, line number 36-43]
Regarding claim 9, Xue in view of Jeong teaches claim 1, accordingly, the rejection of claim 1 is incorporated above.
Xue does not disclose The robot of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the robot to: perform voice recognition on the user voice by performing beam forming in the direction in which the user voice is uttered.
However, Jeong does teach The robot of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the robot to: perform voice recognition on the user voice by performing beam forming in the direction in which the user voice is uttered. (Jeong Column 12, line number 34-39: “In addition, when the sound information from the sound source is speech information, the controller 190 may amplify the sound signal from the sound source by performing beamforming in a direction of the sound source, and as a result, may improve recognition of the speech information based on the amplified sound signal.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Xue to include The robot of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the robot to: perform voice recognition on the user voice by performing beam forming in the direction in which the user voice is uttered, as taught by Jeong. This modification would have been beneficial because, first, when sound signals are received through a plurality of microphones, a sound signal distorted by an obstacle may be detected and properly processed, minimizing the influence of the distorted sound signal; and second, since the direction of a sound source that generates a sound may be estimated, the direction detection and beamforming performance of the robot may be improved. [Jeong Column 2, line number 36-43]
Regarding claim 10, Xue discloses A method for controlling a robot, (Xue Paragraph 0007: “In a first aspect, the present application provides a device control method, which is applied to a first device”) the method comprising: generating a map comprising information regarding a plurality of objects based on sensing information obtained through at least one sensor of the robot; (Xue Paragraph 0149: “Wherein, the navigation sensor 403b is used for calculating the position of the robot in the space, and is used for generating the operation map of the robot. For example, the navigation sensor 403b may specifically be a dead reckoning sensor, an obstacle detection and avoidance (ODOA) sensor, a positioning and mapping (SLAM) sensor, or the like.”) (Xue Paragraph 0155: “The family service robot is the representative of intelligent interaction and intelligent hardware, taking the sweeper as the representative, capable of moving independently in different rooms, flexibly avoiding obstacles, building an environment layout map and executing the navigation task based on position.”) […] based on a user voice being received through the microphone, obtaining information on an intensity of the user voice for each of a plurality of directions; obtaining information on a plurality of candidate directions from which the user voice is received from among the plurality of directions based on the information on the intensity of the user voice for each of the plurality of directions; (Xue Paragraph 0014: “In one possible implementation, the first location comprises: determining a plurality of first candidate positions according to the first speech;”) (Xue Paragraph 0143: “The visual sensing unit can realize the detection of the specific target, calculate the distance from the specific target to the robot, and the microphone array device can specifically include the microphone array 105 and the loudspeaker 106. 
It should be understood that the 105 wheat array can also be the same wheat array as the 100 wheat array, which are all arranged on the robot.”) (Xue Paragraph 0213: “Due to the multi-path effect of the room sound transmission, the sound of the same sound source is directly transmitted by the wheat array, and some of the sound is reflected by the wall body to reach the wheat array, so the wheat array can detect the potential orientation of multiple sound sources. ”) (Xue Paragraph 0215: “The sound propagation reflection path is complex in an indoor environment, and the sound source localization method can generally give several potential positions (and intensities) of the sound source.”) (Note: Position is based on direction) obtaining priority order information for the plurality of candidate directions (Xue Paragraph 0085: “The mobile control module is specifically configured to:”) (Xue Paragraph 0086: “according to the target sequence, orderly moving to the multiple second candidate positions by controlling the moving component until moving to the correct candidate position in the multiple candidate positions.”) (Xue Paragraph 0087: “In one possible implementation, the target order is related to at least one of the following:”) (Xue Paragraph 0088: “a passing path length between the current position of the third device and each second candidate position of the plurality of second candidate positions;”) […] and obtaining information on a direction in which the user voice is uttered from among the plurality of candidate directions based on the priority order information. 
(Xue Paragraph 0017: “In one possible implementation, the target order is related to at least one of the following: a passing path length between the current position of the third device and each second candidate position of the plurality of second candidate positions; The confidence level of each first candidate position in the plurality of first candidate positions is carried in the first information.”) (Xue Paragraph 0018: “In the embodiment of the application, it can calculate the sound source orientation heuristic search cost, the robot sorts the potential user area, preferentially navigates to the user potential area with high confidence, The area close to the distance can reduce the moving path cost and time cost on the premise of guaranteeing the correct moving to the area where the user is located.”) (Xue Paragraph 0333: “Referring to FIG. 9, FIG. 9 is a schematic diagram of an implementation device according to an embodiment of the present application, and the implementation device 900 may be embodied as the first device, the second device or the third device”) (“The chip may be a first device, a second device or a third device described in the above embodiments to perform the steps related to the device control method in the above embodiments.”)
Xue does not disclose […] generating ultrasonic waves toward each of the plurality of objects through a speaker of the robot, obtaining reflectivity information regarding the plurality of objects based on the reflected sounds reflected from each of the objects and received through a microphone of the robot, and storing the reflectivity information, the reflected sounds reflected from each of the objects being at least a portion of the ultrasonic waves reflected from each of the objects; […] and the stored reflectivity information,
However, Jeong teaches […] generate ultrasonic waves toward each of the plurality of objects through the speaker, obtain reflectivity information regarding the plurality of objects based on reflected sounds reflected from each of the objects and received through the microphone, and store the reflectivity information, the reflected sounds reflected from each of the objects being at least a portion of the ultrasonic waves reflected from each of the objects, […] and the stored reflectivity information. (Jeong Column 6, lines 34-44: “However, when a sound signal is reflected off of an obstacle and the reflected sound waves are then received at the plurality of microphones 123 (123a to 123d), it may be difficult for the robot 100 to locate the direction from which the sound originates. For this reason, the accuracy of recognizing the sound is also lowered. As such, the robot 100 according to various embodiments of the present disclosure may detect distortions of sound signals resulting from reflections or deflections caused by obstacles, and may reduce the influence of the distorted sound signal, thereby more accurately locating the sound source and recognizing sounds.”) (Jeong Column 9, lines 49-53: “Meanwhile, when the sound information outputted from the sound source and the direction information on the sound signals are received at the robot 100, the controller 190 may store, in the memory 150, a label presenting the direction information on the sound source”) (Jeong Column 11, lines 45-47: “In an embodiment, the controller 190 may estimate the occupancy area of the obstacle by using the speaker 143 by outputting high-frequency sound information.”) (Jeong Column 13, line 66 - Column 14, line 3: “The robot 100 may apply an echo cancellation algorithm to obtain a high-frequency sound signal reflected from the obstacle S1130, and may perform filtering (band-pass filtering) to isolate the high-frequency sound signal in the obtained sound signal S114”)
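For context, the band-pass step quoted from Jeong (isolating the high-frequency reflected signal within the obtained sound signal) can be sketched in a few lines. This is not code from either reference; the ideal FFT band-pass, the 40 kHz test tone, and all names below are illustrative assumptions.

```python
import numpy as np

def bandpass_fft(signal, fs, f_lo, f_hi):
    """Zero out spectral bins outside [f_lo, f_hi] Hz and return the
    filtered time-domain signal (a simple ideal band-pass filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a 40 kHz "reflected" ultrasonic tone mixed with a 1 kHz audible tone.
fs = 192_000                           # sampling rate high enough for ultrasound
t = np.arange(0, 0.01, 1.0 / fs)
mixed = np.sin(2 * np.pi * 1_000 * t) + 0.3 * np.sin(2 * np.pi * 40_000 * t)
isolated = bandpass_fft(mixed, fs, 30_000, 50_000)  # only the 40 kHz component survives
```

After filtering, the dominant spectral peak of `isolated` sits at the ultrasonic frequency, which is the isolation effect the quoted passage describes.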
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Xue to include […] generate ultrasonic waves toward each of the plurality of objects through the speaker, obtain reflectivity information regarding the plurality of objects based on reflected sounds reflected from each of the objects and received through the microphone, and store the reflectivity information, the reflected sounds reflected from each of the objects being at least a portion of the ultrasonic waves reflected from each of the objects, […] and the stored reflectivity information, as taught by Jeong. This modification would have been beneficial because, first, when sound signals are received through a plurality of microphones, a sound signal distorted by an obstacle may be detected and properly processed, making it possible to minimize the influence of the distorted sound signal; and second, since the direction of a sound source that generates a sound may be estimated, it is possible to improve the direction detection and beamforming performance of a robot. [Jeong Column 2, lines 36-43]
Regarding claim 11, Xue in view of Jeong teaches claim 10; accordingly, the rejection of claim 10 is incorporated above.
Xue does not disclose The method of claim 10, wherein the generating ultrasonic waves toward each of the plurality of objects comprises generating ultrasonic waves at preset distance intervals with respect to a wall object among the plurality of objects, and wherein the obtaining the reflectivity information comprises obtaining reflectivity information regarding the wall object based on reflected sounds reflected from the wall object and received through the microphone, the reflected sounds reflected from the wall object being at least a portion of the ultrasonic waves generated at the preset intervals reflected from the wall object.
However, Jeong teaches The method of claim 10, wherein generating ultrasonic waves toward each of the plurality of objects comprises generating ultrasonic waves at preset distance intervals with respect to a wall object among the plurality of objects, and wherein the obtaining the reflectivity information comprises obtaining reflectivity information regarding the wall object based on reflected sounds reflected from the wall object and received through the microphone, the reflected sounds reflected from the wall object being at least a portion of the ultrasonic waves generated at the preset intervals reflected from the wall object. (Jeong Column 8, lines 1-2: “The robot 100 may include a transceiver 110, an input interface 120, a sensor 130, an output interface 140,”) (Jeong Column 9, lines 5-12: “The output interface 140 may generate a visual, auditory, or haptic related output. In addition, the output interface 140 may include, for example, hardware based outputs such as an optical output interface, that is the display 141, for outputting visual information, and a speaker 143 for outputting auditory information. The speaker 143 may output audible frequency sound information and high-frequency sound information.”) (Jeong Column 10, lines 57-65: “Here, the predetermined range may include a distance range in which the sound signals received by the plurality of microphones 123 are distorted or in which sound is absorbed by the specific obstacle by a predetermined level. However, when the obstacle is a wall and the robot 100 is within a predetermined distance, such as 30 cm, from the wall, the controller 190 may determine or it may be preset that the problem (sound distortion or sound absorption) is likely to occur.”) (Jeong Column 11, lines 45-47: “In an embodiment, the controller 190 may estimate the occupancy area of the obstacle by using the speaker 143 by outputting high-frequency sound information.”) (Jeong Column 11, lines 48-49: “Thus, when the high-frequency sound signals are received through the plurality of microphones 123”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Xue to include The method of claim 10, wherein generating ultrasonic waves toward each of the plurality of objects comprises generating ultrasonic waves at preset distance intervals with respect to a wall object among the plurality of objects, and wherein the obtaining the reflectivity information comprises obtaining reflectivity information regarding the wall object based on reflected sounds reflected from the wall object and received through the microphone, the reflected sounds reflected from the wall object being at least a portion of the ultrasonic waves generated at the preset intervals reflected from the wall object, as taught by Jeong. This modification would have been beneficial because, first, when sound signals are received through a plurality of microphones, a sound signal distorted by an obstacle may be detected and properly processed, making it possible to minimize the influence of the distorted sound signal; and second, since the direction of a sound source that generates a sound may be estimated, it is possible to improve the direction detection and beamforming performance of a robot. [Jeong Column 2, lines 36-43]
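For context on the "preset distance intervals" limitation, one plausible way a robot could reduce pings taken at several distances from the same wall to a single reflectivity value is sketched below. Neither Xue nor Jeong supplies a formula; the 1/d round-trip spreading model and every name here are illustrative assumptions.

```python
def estimate_reflectivity(emit_amp, recv_amp, distance_m):
    """Back out a relative reflectivity from one ping at one distance,
    assuming received amplitude falls off as 1/d over the round trip
    (2 * distance). This spreading model is an illustrative assumption,
    not taken from the cited references."""
    round_trip = 2.0 * distance_m
    expected_if_fully_reflective = emit_amp / round_trip
    return recv_amp / expected_if_fully_reflective

# Pings at preset distance intervals toward the same wall should agree:
distances = [0.3, 0.6, 0.9]                           # metres
received = [1.0 / (2 * d) * 0.8 for d in distances]   # simulated wall, reflectivity 0.8
estimates = [estimate_reflectivity(1.0, r, d) for r, d in zip(received, distances)]
```

Because the spreading loss is divided out, each distance yields the same reflectivity estimate, which is what makes measurements at preset intervals comparable.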
Regarding claim 13, Xue discloses The method of claim 10, wherein the obtaining information on the plurality of candidate directions comprises: identifying as the plurality of candidate directions a preset number of directions from among the plurality of directions in which the respective intensity of the user voice exceeds a predetermined threshold. (Xue Paragraph 0014: “In one possible implementation, the first location comprises: determining a plurality of first candidate positions according to the first speech;”) (Xue paragraph 0213: “Due to the multi-path effect of the room sound transmission, the sound of the same sound source is directly transmitted by the wheat array, and some of the sound is reflected by the wall body to reach the wheat array, so the wheat array can detect the potential orientation of multiple sound sources.”) (Xue Paragraph 0215: “The sound propagation reflection path is complex in an indoor environment, and the sound source localization method can generally give several potential positions (and intensities) of the sound source.”) (Xue Paragraph 0229: “In the embodiment, the sound source orientation greater than the intensity threshold is collected according to the robot main body microphone array as the potential orientation of the user, and the search cost is calculated according to the navigation distance and the confidence intensity. In the current technology, only the maximum intensity direction is taken, and the target navigation point is selected with a fixed threshold interval to explore;”)
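The candidate-selection limitation addressed above (identifying, as candidate directions, a preset number of directions whose voice intensity exceeds a threshold) admits a direct sketch. The function, the degree-keyed dictionary, and the sample values below are illustrative assumptions, not from Xue.

```python
def candidate_directions(intensity_by_dir, threshold, preset_count):
    """Keep directions whose user-voice intensity exceeds the threshold,
    then return at most `preset_count` of them, strongest first."""
    above = [(d, i) for d, i in intensity_by_dir.items() if i > threshold]
    above.sort(key=lambda pair: pair[1], reverse=True)
    return [d for d, _ in above[:preset_count]]

# Hypothetical per-direction intensities (degrees -> relative intensity):
intensities = {0: 0.2, 45: 0.9, 90: 0.6, 135: 0.7, 180: 0.1}
cands = candidate_directions(intensities, threshold=0.5, preset_count=2)
```

Here three directions clear the 0.5 threshold, and the preset count of two keeps only the strongest pair.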
Regarding claim 14, Xue discloses The method of claim 10, wherein the obtaining the priority order information comprises: identifying objects among the plurality of objects positioned in the plurality of candidate directions relative to the position of the robot; (Xue Paragraph 0085: “The mobile control module is specifically configured to:”) (Xue Paragraph 0086: “according to the target sequence, orderly moving to the multiple second candidate positions by controlling the moving component until moving to the correct candidate position in the multiple candidate positions.”) (Xue Paragraph 0149: “Wherein, the navigation sensor 403b is used for calculating the position of the robot in the space, and is used for generating the operation map of the robot. For example, the navigation sensor 403b may specifically be a dead reckoning sensor, an obstacle detection and avoidance (ODOA) sensor, a positioning and mapping (SLAM) sensor, or the like.”) and identifying a priority order with respect to the plurality of candidate directions based on the respective information on the intensity of the user voice corresponding to each of the plurality of candidate directions (Xue Paragraph 0085: “The mobile control module is specifically configured to:”) (Xue Paragraph 0086: “according to the target sequence, orderly moving to the multiple second candidate positions by controlling the moving component until moving to the correct candidate position in the multiple candidate positions.”) (Xue Paragraph 0213: “Because the sound signal has different degrees of loss through different propagation paths, the signal intensity (Ii) of the potential orientation (Ai) of each sound source has a certain positive correlation with the confidence. 
combining the sound source azimuth A whose azimuth is less than the preset azimuth deviation threshold, and calculating the confidence interval R of the sound source azimuth according to the signal intensity threshold T”) (Xue Paragraph 0220: “In one possible implementation, the target order is related to at least one of the following: a passing path length between the current position of the third device and each second candidate position of the plurality of second candidate positions; The confidence level of each first candidate position in the plurality of first candidate positions is carried in the first information.”)
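Xue's heuristic search cost, "calculated according to the navigation distance and the confidence intensity" (Paragraph 0229), can be sketched as a ranking over candidate positions. The linear cost form, the weights, and all names below are illustrative assumptions; Xue does not disclose a specific formula.

```python
def rank_candidates(candidates, w_dist=1.0, w_conf=1.0):
    """Order candidate positions by a heuristic search cost that grows with
    navigation path length and shrinks with confidence, so the robot visits
    nearby, high-confidence candidates first."""
    def cost(c):
        return w_dist * c["path_len"] - w_conf * c["confidence"]
    return sorted(candidates, key=cost)

cands = [
    {"name": "A", "path_len": 4.0, "confidence": 0.9},
    {"name": "B", "path_len": 1.0, "confidence": 0.3},
    {"name": "C", "path_len": 2.0, "confidence": 0.8},
]
ordered = rank_candidates(cands)
```

With these sample values the costs are 3.1, 0.7, and 1.2, so the robot would visit B, then C, then A, reducing path cost relative to always chasing the single maximum-intensity direction.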
Xue does not disclose […] and reflectivity information corresponding to the identified objects.
However, Jeong teaches […] and reflectivity information corresponding to the identified objects. (Jeong Column 11, lines 10-12: “In response to the obstacle being found and identified, the controller 190 may estimate the occupancy area of the obstacle in the space.”) (Jeong Column 11, lines 45-47: “In an embodiment, the controller 190 may estimate the occupancy area of the obstacle by using the speaker 143 by outputting high-frequency sound information.”) (Jeong Column 11, lines 48-49: “Thus, when the high-frequency sound signals are received through the plurality of microphones 123”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Xue to include […] and reflectivity information corresponding to the identified objects, as taught by Jeong. This modification would have been beneficial because, first, when sound signals are received through a plurality of microphones, a sound signal distorted by an obstacle may be detected and properly processed, making it possible to minimize the influence of the distorted sound signal; and second, since the direction of a sound source that generates a sound may be estimated, it is possible to improve the direction detection and beamforming performance of a robot. [Jeong Column 2, lines 36-43]
Regarding claim 16, Xue discloses A non-transitory computer readable medium having instructions stored therein, which when executed by at least one processor cause the at least one processor to execute a method of controlling a robot, the method comprising: (Xue Paragraph 0342: “invention further provides a computer readable storage medium, the computer readable storage medium is stored with a program for signal processing, when it is run on the computer, the computer executes the steps executed by the executing device, or enabling the computer to execute the steps executed by the device control device.”) generating a map comprising information regarding a plurality of objects based on sensing information obtained through at least one sensor of the robot; (Xue Paragraph 0149: “Wherein, the navigation sensor 403b is used for calculating the position of the robot in the space, and is used for generating the operation map of the robot. For example, the navigation sensor 403b may specifically be a dead reckoning sensor, an obstacle detection and avoidance (ODOA) sensor, a positioning and mapping (SLAM) sensor, or the like.”) (Xue Paragraph 0155: “The family service robot is the representative of intelligent interaction and intelligent hardware, taking the sweeper as the representative, capable of moving independently in different rooms, flexibly avoiding obstacles, building an environment layout map and executing the navigation task based on position.”) […] based on a user voice being received through the microphone, obtaining information on an intensity of the user voice for each of a plurality of directions; obtaining information on a plurality of candidate directions from which the user voice is received from among the plurality of directions based on the information on the intensity of the user voice for each of the plurality of directions; (Xue Paragraph 0014: “In one possible implementation, the first location comprises: determining a plurality of first candidate positions according to the first speech;”) (Xue Paragraph 0143: “The visual sensing unit can realize the detection of the specific target, calculate the distance from the specific target to the robot, and the microphone array device can specifically include the microphone array 105 and the loudspeaker 106. It should be understood that the 105 wheat array can also be the same wheat array as the 100 wheat array, which are all arranged on the robot.”) (Xue Paragraph 0213: “Due to the multi-path effect of the room sound transmission, the sound of the same sound source is directly transmitted by the wheat array, and some of the sound is reflected by the wall body to reach the wheat array, so the wheat array can detect the potential orientation of multiple sound sources.”) (Xue Paragraph 0215: “The sound propagation reflection path is complex in an indoor environment, and the sound source localization method can generally give several potential positions (and intensities) of the sound source.”) (Note: Position is based on direction) obtaining priority order information for the plurality of candidate directions based on a position of the robot (Xue Paragraph 0085: “The mobile control module is specifically configured to:”) (Xue Paragraph 0086: “according to the target sequence, orderly moving to the multiple second candidate positions by controlling the moving component until moving to the correct candidate position in the multiple candidate positions.”) (Xue Paragraph 0087: “In one possible implementation, the target order is related to at least one of the following:”) (Xue Paragraph 0088: “a passing path length between the current position of the third device and each second candidate position of the plurality of second candidate positions;”) […] and obtaining information on a direction in which the user voice is uttered from among the plurality of candidate directions based on the priority order information.
(Xue Paragraph 0017: “In one possible implementation, the target order is related to at least one of the following: a passing path length between the current position of the third device and each second candidate position of the plurality of second candidate positions; The confidence level of each first candidate position in the plurality of first candidate positions is carried in the first information.”) (Xue Paragraph 0018: “In the embodiment of the application, it can calculate the sound source orientation heuristic search cost, the robot sorts the potential user area, preferentially navigates to the user potential area with high confidence, The area close to the distance can reduce the moving path cost and time cost on the premise of guaranteeing the correct moving to the area where the user is located.”) (Xue Paragraph 0333: “Referring to FIG. 9, FIG. 9 is a schematic diagram of an implementation device according to an embodiment of the present application, and the implementation device 900 may be embodied as the first device, the second device or the third device”) (“The chip may be a first device, a second device or a third device described in the above embodiments to perform the steps related to the device control method in the above embodiments.”)
Xue does not disclose […] generating ultrasonic waves toward each of the plurality of objects through a speaker of the robot, obtaining reflectivity information regarding the plurality of objects based on the reflected sounds reflected from each of the objects and received through a microphone of the robot, and storing the reflectivity information, the reflected sounds reflected from each of the objects being at least a portion of the ultrasonic waves reflected from each of the objects; […] and the stored reflectivity information;
Howev