Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is the first Office action on the merits. Claims 1-10 are currently pending and addressed below.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/19/2024 has been received. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The broadest reasonable interpretation of a claim drawn to a computer readable medium typically covers both non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. 101, by adding the limitation "non-transitory" to the claim. The Examiner suggests amending the claim to recite "A non-transitory computer program product" or "A non-transitory computer readable medium".
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 3-10 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi Afrouzi et al. (US 20220066456 A1), hereinafter Ebrahimi Afrouzi, in view of Watanabe et al. (US 20170357242 A1), hereinafter Watanabe, and Perkins et al. (US 20240061428 A1), hereinafter Perkins.
Regarding claim 1, Ebrahimi Afrouzi teaches:
1. A method of collision detection braking control in a serving robot, which is performed by a computing device including at least one processor, the method comprising:
… related to collision detection of a serving robot; (Paragraph 0307, "In embodiments, deep learning may be used to improve perception, improve trajectory such that it follows the planned path more accurately, improve coverage, improve obstacle detection and collision prevention, improve decision making such that it is more human-like, improve decision making in situation wherein some data is missing, etc.")
…
monitoring the movement state of the serving robot (Paragraph 0421, "In some embodiments, the processor of the robot may track the position of the robot as the robot moves from a known state to a next discrete state. The next discrete state may be a state within one or more layers of superimposed Cartesian (or other type) coordinate system, wherein some ordered pairs may be marked as possible obstacles. In some embodiments, the processor may use an inverse measurement model when filling obstacle data into the coordinate system to indicate obstacle occupancy, free space, or probability of obstacle occupancy. In some embodiments, the processor of the robot may determine an uncertainty of the pose of the robot and the state space surrounding the robot. In some embodiments, the processor of the robot may use a Markov assumption, wherein each state is a complete summary of the past and used to determine the next state of the robot. In some embodiments, the processor may use a probability distribution to estimate a state of the robot since state transitions occur by actuations that are subject to uncertainties, such as slippage (e.g., slippage while driving on carpet, low-traction flooring, slopes, and over obstacles such as cords and cables). In some embodiments, the probability distribution may be determined based on readings collected by sensors of the robot. In some embodiments, the processor may use an Extended Kalman Filter for non-linear problems. In some embodiments, the processor of the robot may use an ensemble consisting of a large number of virtual copies of the robot, each virtual copy representing a possible state that the real robot is in. In embodiments, the processor may maintain, increase, or decrease the size of the ensemble as needed. In embodiments, the processor may renew, weaken, or strengthen the virtual copy members of the ensemble. 
In some embodiments, the processor may identify a most feasible member and one or more feasible successors of the most feasible member. In some embodiments, the processor may use maximum likelihood methods to determine the most likely member to correspond with the real robot at each point in time.") and a motor current value of the serving robot; (Paragraph 0406, "In some embodiments, a map of the environment is separately built from the obstacle map. In some embodiments, an obstacle map is divided into two categories, moving and stationary obstacle maps. In some embodiments, the processor separately builds and maintains each type of obstacle map. In some embodiments, the processor of the robot may detect an obstacle based on an increase in electrical current drawn by a wheel or brush or other component motor. For example, when stuck on an object, the brush motor may draw more current as it experiences resistance cause by impact against the object. In some embodiments, the processor superimposes the obstacle maps with moving and stationary obstacles to form a complete perception of the environment." as well as Paragraph 1232, "In some embodiments, the processor of the robot uses real-time environmental sensor data (or environmental characteristics inferred therefrom) or environmental sensor data aggregated from different working sessions or information from the aggregate map of the environment to dynamically adjust the speed of components and/or activate/deactivate functions of the robot during operation in an environment. For example, an electrical current sensor may be used to measure the amount of current drawn by a motor of a main brush in real-time. The processor may infer the type of driving surface based on the amount current drawn and in response adjusts the speed of components such that they are ideal for the particular driving surface type. 
For instance, if the current drawn by the motor of the main brush is high, the processor may infer that a robotic vacuum is on carpet, as more power is required to rotate the main brush at a particular speed on carpet as compared to hard flooring (e.g., wood or tile). In response to inferring carpet, the processor may increase the speed of the main brush and impeller (or increase applied torque without changing speed, or increase speed and torque) and reduce the speed of the wheels for a deeper cleaning. Some embodiments may raise or lower a brush in response to a similar inference, e.g., lowering a brush to achieve a deeper clean. In a similar manner, an electrical current sensor that measures the current drawn by a motor of a wheel may be used to predict the type of driving surface, as carpet or grass, for example, requires more current to be drawn by the motor to maintain a particular speed as compared to hard driving surface. In some embodiments, the processor aggregates motor current measured during different working sessions and determines adjustments to speed of components using the aggregated data. In another example, a distance sensor takes distance measurements and the processor infers the type of driving surface using the distance measurements. For instance, the processor infers the type of driving surface from distance measurements of a time-of-flight (“TOF”) sensor positioned on, for example, the bottom surface of the robot as a hard driving surface when, for example, when consistent distance measurements are observed over time (to within a threshold) and soft driving surface when irregularity in readings are observed due to the texture of for example, carpet or grass. In a further example, the processor uses sensor readings of an image sensor with at least one IR illuminator or any other structured light positioned on the bottom side of the robot to infer type of driving surface. The processor observes the signals to infer type of driving surface. 
For example, driving surfaces such as carpet or grass produce more distorted and scattered signals as compared with hard driving surfaces due to their texture. The processor may use this information to infer the type of driving surface.") and
controlling the serving robot to brake for the braking time (Paragraph 1237, "In some embodiments, the processor may use machine learning techniques to predict environmental characteristics using sensor data such that adjustments to speed of components of the robot may be made autonomously and in real-time to accommodate the current environment. In some embodiments, Bayesian methods may be used in predicting environmental characteristics. For example, to increase confidence in predictions (or measurements or inferences) of environmental characteristics in different locations of the environment, the processor may use a first set of sensor data collected by a first sensor to predict (or measure or infer) an environmental characteristic of a particular location a priori to using a second set of sensor data collected by a second sensor to predict an environmental characteristic of the particular location. Examples of adjustments may include, but are not limited to, adjustments to the speed of components (e.g., a cleaning tool such a main brush or side brush, wheels, impeller, cutting blade, digger, salt or fertilizer distributor, or other component depending on the type of robot), activating/deactivating functions (e.g., UV treatment, sweeping, steam or liquid mopping, vacuuming, mowing, ploughing, salt distribution, fertilizer distribution, digging, and other functions depending on the type of robot), adjustments to movement path, adjustments to the division of the environment into subareas, and operation schedule, etc. In some embodiments, the processor may use a classifier such as a convolutional neural network to classify real-time sensor data of a location within the environment into different environmental characteristic classes such as driving surface types, room or area types, levels of debris accumulation, debris types, debris sizes, traffic level, obstacle density, human activity level, driving surface quality, and the like. 
In some embodiments, the processor may dynamically and in real-time adjust the speed of components of the robot based on the current environmental characteristics. Initially, the classifier may be trained such that it may properly classify sensor data to different environmental characteristic classes. In some embodiments, training may be executed remotely and trained model parameters may be downloaded to the robot, which is not to suggest that any other operation herein must be performed on the robot. The classifier may be trained by, for example, providing the classifier with training and target data that contains the correct environmental characteristic classifications of the sensor readings within the training data. For example, the classifier may be trained to classify electric current sensor data of a wheel motor into different driving surface types. For instance, if the magnitude of the current drawn by the wheel motor is greater than a particular threshold for a predetermined amount of time, the classifier may classify the current sensor data to a carpet driving surface type class (or other soft driving surface depending on the environment of the robot) with some certainty. In other embodiments, the processor may classify sensor data based on the change in value of the sensor data over a predetermined amount of time or using entropy. For example, the processor may classify current sensor data of a wheel motor into a driving surface type class based on the change in electrical current over a predetermined amount of time or entropy value. In response to predicting an environmental characteristic, such as a driving type, the processor may adjust the speed of components such that they are optimal for operating in an environment with the particular characteristics predicted, such as a predicted driving surface type. In some embodiments, adjusting the speed of components may include adjusting the speed of the motors driving the components. 
In some embodiments, the processor may also choose actions and/or settings of the robot in response to predicted (or measured or inferred) environmental characteristics of a location. In other examples, the classifier may classify distance sensor data, audio sensor data, or optical sensor data into different environmental characteristic classes (e.g., different driving surface types, room or area types, levels of debris accumulation, debris types, debris sizes, traffic level, obstacle density, human activity level, driving surface quality, etc.).") …
Ebrahimi Afrouzi does not specifically teach a sensitivity for collision detection, a braking time, or collision reference values based on the sensitivity. However, Watanabe, in the same field of endeavor of robotics, teaches:
… acquiring sensitivity (Paragraph 0011, "A region of the operation region of the robot arm, which is closer to the work region of the operator, is set as the low-speed operation region, and the collision detection sensitivity in the low-speed operation region is set to be as high as possible. In this setting, even in a case where the operator contacts the robot arm, the robot arm collides against the operator at a low speed, and can be stopped with a high sensitivity. In contrast, the robot arm can be operated at a speed that is as high as possible in the high-speed operation region. As a result, compared to a case where the collision detection sensitivity is not changed between the high-speed operation region and the low-speed operation region, safety for the operator can be improved, and the work efficiency can be increased." as well as Paragraph 0019, "The collision stop section may be configured to change the collision detection sensitivity so that the collision detection sensitivity in the high-speed operation region, the collision detection sensitivity in the at least one intermediate-speed operation region, and the collision detection sensitivity in the low-speed operation region are increased in this order, in the high-speed operation region, the at least one intermediate-speed operation region, and the low-speed operation region." and Paragraph 0070, "Although in the present embodiment, the collision detection sensitivity (the first reference value, the second reference value) at the detection of the collision is manually set, and is changed (switched) between the high-speed operation region 20H and the low-speed operation region 20L, the collision detection sensitivity may be automatically set. 
Specifically, the controller 3 causes the robot 2 to perform particular work (operation), the robot 2 to learn maximum torque generated in each of the axes 21 to 26 in every operation of the robot 2, and sets the first reference value and the second reference value based on the learned maximum torque. This makes it possible to set the collision detection sensitivity to an optimal value corresponding to an operation environment.") … determining collision detection reference values for each movement state based on the sensitivity; … when the motor current value exceeds the collision detection reference value corresponding to the movement state. (Paragraph 0054, "Then, the collision detecting section 33 determines whether or not the difference current value is larger than a first reference value set for each of the axes (step S4). In a case where the collision detecting section 33 determines that the difference current value is larger than the first reference value, the collision detecting section 33 determines that the collision has occurred, and outputs the collision detection signal indicating that the collision has been detected to the current limiting section 34 (see FIG. 4) (step S5). Then, the current limiting section 34 provides the limited current command value to the current generating circuit 32, and stops (ceases) current supply to the servo motor 28 (step S6).")
Ebrahimi Afrouzi in view of Watanabe does not specifically teach a braking time. However, Perkins, in the same field of endeavor of robotics, teaches:
… and a braking time (Paragraph 0076, "In some embodiments, the one or more operating parameters comprise one or more operating speed limits, such as a travel speed limit of a mobile base of the robot and/or a speed limit of a relevant point in space (e.g., a point located on an exterior surface of a robotic arm, a robotic manipulator, a robotic joint of the robot, or an objected manipulated by the robot). In some embodiments, the one or more operating parameters comprise an operating velocity and/or an acceleration limit. In some embodiments, the computing device receives a velocity and/or an acceleration of the entity, and the one or more operating parameters are based on the velocity and/or the acceleration (e.g., a current velocity and/or acceleration, an immediately prior velocity and/or acceleration, or another suitable vector.). In some embodiments, the computing device determines an operating velocity and/or acceleration limit of the robot, and the operating velocity and/or acceleration limit are included in the set of operating parameters for the robot. In some embodiments, the set of operating parameters comprises one or more stopping time limits (e.g., such that the robot 504 comes to a stop within the stopping time limit and/or the onboard safety systems of the robot 504 would observe the configuration and/or velocities of the robot 504 to confirm it is operating within the stopping time limit).") …
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and operation methods as taught by Ebrahimi Afrouzi with the ability to dynamically adjust the sensitivity and collision reference values as taught by Watanabe as well as with the ability to monitor and adjust operating parameters such as braking time as taught by Perkins. This would ensure efficient and safe operation of the robotic system during use in a variety of situations/environments.
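Examiner Note: For illustration only, the collision-detection braking control of claim 1, as understood from the combined teachings above, may be sketched in the following pseudocode-style form. All function names, movement states, and threshold values below are hypothetical assumptions introduced for clarity; they do not appear in the cited references and are not asserted as the applicant's or the references' actual implementation.

```python
# Hypothetical sketch: map a collision-detection sensitivity to per-
# movement-state motor-current reference values (Watanabe), and detect
# a collision when the monitored motor current exceeds the reference
# value for the current movement state (Ebrahimi Afrouzi / Watanabe).
# All names and numeric values are illustrative assumptions.

def collision_reference_values(sensitivity: float) -> dict:
    """Higher sensitivity yields lower current thresholds (amperes),
    so collisions are detected earlier."""
    base = {"straight": 4.0, "rotate": 3.0, "stopped": 1.5}  # assumed
    return {state: ref / sensitivity for state, ref in base.items()}

def should_brake(movement_state: str, motor_current: float,
                 sensitivity: float) -> bool:
    """Return True when the motor current exceeds the collision
    detection reference value for the given movement state; the robot
    would then be braked for the acquired braking time (Perkins)."""
    refs = collision_reference_values(sensitivity)
    return motor_current > refs[movement_state]

# Example: at sensitivity 2.0, the 'straight' reference becomes 2.0 A,
# so a 2.5 A reading is treated as a collision.
```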
Regarding claim 3, where all the limitations of claim 1 are discussed above, Ebrahimi Afrouzi further teaches:
3. The method of claim 1, wherein the monitoring of the movement state of the serving robot and the motor current value of the serving robot includes:
receiving two motor current values from a motor driver module (Paragraph 0406, "In some embodiments, a map of the environment is separately built from the obstacle map. In some embodiments, an obstacle map is divided into two categories, moving and stationary obstacle maps. In some embodiments, the processor separately builds and maintains each type of obstacle map. In some embodiments, the processor of the robot may detect an obstacle based on an increase in electrical current drawn by a wheel or brush or other component motor. For example, when stuck on an object, the brush motor may draw more current as it experiences resistance cause by impact against the object. In some embodiments, the processor superimposes the obstacle maps with moving and stationary obstacles to form a complete perception of the environment." as well as Paragraph 1232, "In some embodiments, the processor of the robot uses real-time environmental sensor data (or environmental characteristics inferred therefrom) or environmental sensor data aggregated from different working sessions or information from the aggregate map of the environment to dynamically adjust the speed of components and/or activate/deactivate functions of the robot during operation in an environment. For example, an electrical current sensor may be used to measure the amount of current drawn by a motor of a main brush in real-time. The processor may infer the type of driving surface based on the amount current drawn and in response adjusts the speed of components such that they are ideal for the particular driving surface type. For instance, if the current drawn by the motor of the main brush is high, the processor may infer that a robotic vacuum is on carpet, as more power is required to rotate the main brush at a particular speed on carpet as compared to hard flooring (e.g., wood or tile). 
In response to inferring carpet, the processor may increase the speed of the main brush and impeller (or increase applied torque without changing speed, or increase speed and torque) and reduce the speed of the wheels for a deeper cleaning. Some embodiments may raise or lower a brush in response to a similar inference, e.g., lowering a brush to achieve a deeper clean. In a similar manner, an electrical current sensor that measures the current drawn by a motor of a wheel may be used to predict the type of driving surface, as carpet or grass, for example, requires more current to be drawn by the motor to maintain a particular speed as compared to hard driving surface. In some embodiments, the processor aggregates motor current measured during different working sessions and determines adjustments to speed of components using the aggregated data. In another example, a distance sensor takes distance measurements and the processor infers the type of driving surface using the distance measurements. For instance, the processor infers the type of driving surface from distance measurements of a time-of-flight (“TOF”) sensor positioned on, for example, the bottom surface of the robot as a hard driving surface when, for example, when consistent distance measurements are observed over time (to within a threshold) and soft driving surface when irregularity in readings are observed due to the texture of for example, carpet or grass. In a further example, the processor uses sensor readings of an image sensor with at least one IR illuminator or any other structured light positioned on the bottom side of the robot to infer type of driving surface. The processor observes the signals to infer type of driving surface. For example, driving surfaces such as carpet or grass produce more distorted and scattered signals as compared with hard driving surfaces due to their texture. The processor may use this information to infer the type of driving surface." 
as well as Paragraph 0239, "In some embodiments, at least a portion of the sensors of the robot are provided in a sensor array, wherein the at least a portion of sensors are coupled to a flexible, semi-flexible, or rigid frame. In some embodiments, the frame is fixed to a chassis or casing of the robot. In some embodiments, the sensors are positioned along the frame such that the field of view of the robot is maximized while the cross-talk or interference between sensors is minimized. In some cases, a component may be placed between adjacent sensors to minimize cross-talk or interference. In some embodiments, the robot may include sensors to detect or sense objects, acceleration, angular and linear movement, temperature, humidity, water, pollution, particles in the air, supplied power, proximity, external motion, device motion, sound signals, ultrasound signals, light signals, fire, smoke, carbon monoxide, global-positioning-satellite (GPS) signals, radio-frequency (RF) signals, other electromagnetic signals or fields, visual features, textures, optical character recognition (OCR) signals, spectrum meters, and the like. In some embodiments, a microprocessor or a microcontroller of the robot may poll a variety of sensors at intervals.") related to each of two wheels provided in the serving robot; (Paragraph 0240, "In some embodiments, the robot may be wheeled (e.g., rigidly fixed, suspended fixed, steerable, suspended steerable, caster, or suspended caster), legged, or tank tracked. In some embodiments, the wheels, legs, tracks, etc. of the robot may be controlled individually or controlled in pairs (e.g., like cars) or in groups of other sizes, such as three or four as in omnidirectional wheels. In some embodiments, the robot may use differential-drive wherein two fixed wheels have a common axis of rotation and angular velocities of the two wheels are equal and opposite such that the robot may rotate on the spot. 
In some embodiments, the robot may include a terminal device such as those on computers, mobile phones, tablets, or smart wearable devices.") and
… a preset number of times. (Paragraph 1237, "In some embodiments, the processor may use machine learning techniques to predict environmental characteristics using sensor data such that adjustments to speed of components of the robot may be made autonomously and in real-time to accommodate the current environment. In some embodiments, Bayesian methods may be used in predicting environmental characteristics. For example, to increase confidence in predictions (or measurements or inferences) of environmental characteristics in different locations of the environment, the processor may use a first set of sensor data collected by a first sensor to predict (or measure or infer) an environmental characteristic of a particular location a priori to using a second set of sensor data collected by a second sensor to predict an environmental characteristic of the particular location. Examples of adjustments may include, but are not limited to, adjustments to the speed of components (e.g., a cleaning tool such a main brush or side brush, wheels, impeller, cutting blade, digger, salt or fertilizer distributor, or other component depending on the type of robot), activating/deactivating functions (e.g., UV treatment, sweeping, steam or liquid mopping, vacuuming, mowing, ploughing, salt distribution, fertilizer distribution, digging, and other functions depending on the type of robot), adjustments to movement path, adjustments to the division of the environment into subareas, and operation schedule, etc. In some embodiments, the processor may use a classifier such as a convolutional neural network to classify real-time sensor data of a location within the environment into different environmental characteristic classes such as driving surface types, room or area types, levels of debris accumulation, debris types, debris sizes, traffic level, obstacle density, human activity level, driving surface quality, and the like. 
In some embodiments, the processor may dynamically and in real-time adjust the speed of components of the robot based on the current environmental characteristics. Initially, the classifier may be trained such that it may properly classify sensor data to different environmental characteristic classes. In some embodiments, training may be executed remotely and trained model parameters may be downloaded to the robot, which is not to suggest that any other operation herein must be performed on the robot. The classifier may be trained by, for example, providing the classifier with training and target data that contains the correct environmental characteristic classifications of the sensor readings within the training data. For example, the classifier may be trained to classify electric current sensor data of a wheel motor into different driving surface types. For instance, if the magnitude of the current drawn by the wheel motor is greater than a particular threshold for a predetermined amount of time, the classifier may classify the current sensor data to a carpet driving surface type class (or other soft driving surface depending on the environment of the robot) with some certainty. In other embodiments, the processor may classify sensor data based on the change in value of the sensor data over a predetermined amount of time or using entropy. For example, the processor may classify current sensor data of a wheel motor into a driving surface type class based on the change in electrical current over a predetermined amount of time or entropy value. In response to predicting an environmental characteristic, such as a driving type, the processor may adjust the speed of components such that they are optimal for operating in an environment with the particular characteristics predicted, such as a predicted driving surface type. In some embodiments, adjusting the speed of components may include adjusting the speed of the motors driving the components. 
In some embodiments, the processor may also choose actions and/or settings of the robot in response to predicted (or measured or inferred) environmental characteristics of a location. In other examples, the classifier may classify distance sensor data, audio sensor data, or optical sensor data into different environmental characteristic classes (e.g., different driving surface types, room or area types, levels of debris accumulation, debris types, debris sizes, traffic level, obstacle density, human activity level, driving surface quality, etc.)." Examiner Note: The sensing is performed at a given frequency; a length of time may therefore also be understood as a number of sensing occurrences.)
Ebrahimi Afrouzi does not specifically teach monitoring the current value against the reference. However, Watanabe, in the same field of endeavor of robotics, teaches:
… monitoring whether each of the two motor current values exceeds the collision detection reference value corresponding to the movement state of the serving robot (Paragraph 0054, "Then, the collision detecting section 33 determines whether or not the difference current value is larger than a first reference value set for each of the axes (step S4). In a case where the collision detecting section 33 determines that the difference current value is larger than the first reference value, the collision detecting section 33 determines that the collision has occurred, and outputs the collision detection signal indicating that the collision has been detected to the current limiting section 34 (see FIG. 4) (step S5). Then, the current limiting section 34 provides the limited current command value to the current generating circuit 32, and stops (ceases) current supply to the servo motor 28 (step S6).") …
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and operation methods as taught by Ebrahimi Afrouzi with the ability to monitor and to dynamically adjust the sensitivity and collision reference values as taught by Watanabe. This would ensure efficient and safe operation of the robotic system during use in a variety of situations/environments.
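Examiner Note: For illustration only, the claim 3 monitoring step, as understood from the combined teachings above, may be sketched as follows: each of the two wheel motor current values is compared against the movement-state reference value, and an exceedance is confirmed only after it occurs a preset number of consecutive times. All names and numeric values below are hypothetical assumptions introduced for clarity and are not taken from the cited references.

```python
# Hypothetical sketch: monitor two wheel motor currents against a
# collision detection reference value, confirming a collision only
# after the reference is exceeded a preset number of consecutive
# times (filtering out transient current spikes).

def monitor_wheels(current_samples, reference: float,
                   preset_count: int) -> bool:
    """current_samples: iterable of (left_current, right_current)
    readings from the motor driver module, in amperes (assumed)."""
    consecutive = 0
    for left, right in current_samples:
        if left > reference or right > reference:
            consecutive += 1
            if consecutive >= preset_count:
                return True  # collision confirmed; trigger braking
        else:
            consecutive = 0  # transient spike; reset the counter
    return False

# Example: with a 2.0 A reference and preset_count of 3, three
# consecutive over-threshold readings on either wheel confirm a
# collision, while an isolated spike does not.
```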
Regarding claim 4, where all the limitations of claim 3 are discussed above, Ebrahimi Afrouzi further teaches:
4. The method of claim 3, wherein the monitoring of whether each of the two motor current values exceeds the collision detection reference value corresponding to the movement state of the serving robot the preset number of times includes:
immediately after the computing device recognizes that at least one of the two motor current values … when the computing device recognizes that a specific motor current value recognized as exceeding the value once … ; or
immediately after the computing device recognizes that at least one of the two motor current values exceeds … when the computing device recognizes that the specific motor current value recognized as exceeding the value once … (Paragraph 1237, "In some embodiments, the processor may use machine learning techniques to predict environmental characteristics using sensor data such that adjustments to speed of components of the robot may be made autonomously and in real-time to accommodate the current environment. In some embodiments, Bayesian methods may be used in predicting environmental characteristics. For example, to increase confidence in predictions (or measurements or inferences) of environmental characteristics in different locations of the environment, the processor may use a first set of sensor data collected by a first sensor to predict (or measure or infer) an environmental characteristic of a particular location a priori to using a second set of sensor data collected by a second sensor to predict an environmental characteristic of the particular location. Examples of adjustments may include, but are not limited to, adjustments to the speed of components (e.g., a cleaning tool such a main brush or side brush, wheels, impeller, cutting blade, digger, salt or fertilizer distributor, or other component depending on the type of robot), activating/deactivating functions (e.g., UV treatment, sweeping, steam or liquid mopping, vacuuming, mowing, ploughing, salt distribution, fertilizer distribution, digging, and other functions depending on the type of robot), adjustments to movement path, adjustments to the division of the environment into subareas, and operation schedule, etc. 
In some embodiments, the processor may use a classifier such as a convolutional neural network to classify real-time sensor data of a location within the environment into different environmental characteristic classes such as driving surface types, room or area types, levels of debris accumulation, debris types, debris sizes, traffic level, obstacle density, human activity level, driving surface quality, and the like. In some embodiments, the processor may dynamically and in real-time adjust the speed of components of the robot based on the current environmental characteristics. Initially, the classifier may be trained such that it may properly classify sensor data to different environmental characteristic classes. In some embodiments, training may be executed remotely and trained model parameters may be downloaded to the robot, which is not to suggest that any other operation herein must be performed on the robot. The classifier may be trained by, for example, providing the classifier with training and target data that contains the correct environmental characteristic classifications of the sensor readings within the training data. For example, the classifier may be trained to classify electric current sensor data of a wheel motor into different driving surface types. For instance, if the magnitude of the current drawn by the wheel motor is greater than a particular threshold for a predetermined amount of time, the classifier may classify the current sensor data to a carpet driving surface type class (or other soft driving surface depending on the environment of the robot) with some certainty. In other embodiments, the processor may classify sensor data based on the change in value of the sensor data over a predetermined amount of time or using entropy. For example, the processor may classify current sensor data of a wheel motor into a driving surface type class based on the change in electrical current over a predetermined amount of time or entropy value. 
In response to predicting an environmental characteristic, such as a driving type, the processor may adjust the speed of components such that they are optimal for operating in an environment with the particular characteristics predicted, such as a predicted driving surface type. In some embodiments, adjusting the speed of components may include adjusting the speed of the motors driving the components. In some embodiments, the processor may also choose actions and/or settings of the robot in response to predicted (or measured or inferred) environmental characteristics of a location. In other examples, the classifier may classify distance sensor data, audio sensor data, or optical sensor data into different environmental characteristic classes (e.g., different driving surface types, room or area types, levels of debris accumulation, debris types, debris sizes, traffic level, obstacle density, human activity level, driving surface quality, etc.)." and also see Paragraph 1176, "In employing Bayesian methods, a false positive sensor reading does not cause harm in functionality of the robot as the processor uses an initial sensor reading to only form a prior belief. In some embodiments, the processor may require a second or third observation to form a conclusion and influence of prior belief. If a second observation does not occur within a timely manner (or after a number of counts) the second observation may not be considered a posterior and may not influence a prior belief. In some embodiments, other statistical interpretations may be used. For example, the processor may use a frequentist interpretation wherein a certain frequency of an observation may be required to form a belief. In some embodiments, other simpler implementations for formulating beliefs may be used. In some embodiments, a probability may be associated with each instance of an observation. For example, each observation may count as a 50% probability of the observation being true. 
In this implementation, a probability of more than 50% may be required for the robot to take action.")
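Examiner Note: the current-threshold classification described in the quoted Paragraph 1237, in which a wheel-motor current magnitude above a threshold for a predetermined amount of time (equivalently, a number of sampling occurrences at a set sensing frequency) suggests a carpet driving surface, can be sketched as follows (illustrative Python; class names, thresholds, and counts are hypothetical):

```python
# Hypothetical sketch: classify wheel-motor current samples taken at a
# fixed sensing frequency, so "a predetermined amount of time" maps to a
# required number of consecutive occurrences above the threshold.

def classify_surface(current_samples, threshold=1.5, required_count=10):
    """Return 'carpet' if the current stays above the threshold for the
    required number of consecutive samples, else 'hard_floor'."""
    streak = 0
    for sample in current_samples:
        streak = streak + 1 if sample > threshold else 0
        if streak >= required_count:
            return "carpet"
    return "hard_floor"

surface = classify_surface([1.6] * 12)   # sustained high current draw
```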
Ebrahimi Afrouzi does not specifically teach using a reference value to determine whether a collision has occurred or not. However, Watanabe, in the same field of endeavor of robotics, teaches:
… exceeds the collision detection reference value once, … is less than or equal to the collision detection reference value, (Paragraph 0054, "Then, the collision detecting section 33 determines whether or not the difference current value is larger than a first reference value set for each of the axes (step S4). In a case where the collision detecting section 33 determines that the difference current value is larger than the first reference value, the collision detecting section 33 determines that the collision has occurred, and outputs the collision detection signal indicating that the collision has been detected to the current limiting section 34 (see FIG. 4) (step S5). Then, the current limiting section 34 provides the limited current command value to the current generating circuit 32, and stops (ceases) current supply to the servo motor 28 (step S6).") recognizing that the serving robot has not collided with anything (Paragraph 0069, "Although in the present embodiment, the collision detecting section 33 promptly stops the operation of the robot arm (10, 12) after detection of the collision, the present invention is not limited to this. For example, in a case where the collision detecting section 33 detects the collision against the obstacle, the controller 3 may perform a stress relieving process for relieving a stress generated between the member to be driven and the obstacle, due to the collision. Specifically, the controller 3 detects whether or not the arm has collided against the obstacle, and controls the operation of the arm so that the arm is moved away a specified distance from the obstacle based on a path before the collision, in a case where the controller 3 detects that the arm has collided against the obstacle. More specifically, in the case of detection of the collision, the collision detecting section 33 performs the following operation. 
Regarding the axis in which a value obtained by subtracting the actual current value of the servo motor 28 from the theoretical current value of the servo motor 28 has a sign opposite to that of the theoretical current value, the collision detecting section 33 performs a retracting process for moving the axis in a direction opposite to that of the movement of the axis before the collision. On the other hand, regarding the axis in which the value obtained by subtracting the actual current value of the servo motor 28 from the theoretical current value of the servo motor 28 has the same sign as that of the theoretical current value, the collision detecting section 33 performs an advancing process for moving the axis in the same direction as that of the movement of the axis before the collision. This can mitigate an impact of the collision. As a result, safety can be further improved.") … the collision detection reference value once, … exceeds the collision detection reference value, (Paragraph 0054, "Then, the collision detecting section 33 determines whether or not the difference current value is larger than a first reference value set for each of the axes (step S4). In a case where the collision detecting section 33 determines that the difference current value is larger than the first reference value, the collision detecting section 33 determines that the collision has occurred, and outputs the collision detection signal indicating that the collision has been detected to the current limiting section 34 (see FIG. 4) (step S5). Then, the current limiting section 34 provides the limited current command value to the current generating circuit 32, and stops (ceases) current supply to the servo motor 28 (step S6).") recognizing that the serving robot has collided with something. 
(Paragraph 0069, quoted above.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and operating methods as taught by Ebrahimi Afrouzi with the ability to utilize a reference value for determining whether a collision has occurred or not as taught by Watanabe. Incorporating this into the observation methods taught by Ebrahimi Afrouzi would allow the system to avoid erroneous collision detections and make a more accurate determination, which would allow the robot to operate more effectively.
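Examiner Note: the claim 4 scheme, in which monitoring continues immediately after a motor current value first exceeds the collision detection reference value and a collision is concluded only if the value exceeds the reference the preset number of times, can be sketched as follows (illustrative Python; all names and values are hypothetical):

```python
# Hypothetical sketch of the preset-count confirmation scheme: after a
# current first exceeds the reference, keep monitoring; conclude a
# collision only if it exceeds the reference the preset number of times,
# and conclude no collision if it falls back to or below the reference.

def confirm_collision(samples, reference, preset_count):
    exceeded = False
    count = 0
    for value in samples:
        if value > reference:
            exceeded = True
            count += 1
            if count >= preset_count:
                return True        # exceeded the preset number of times
        elif exceeded:
            return False           # dropped back: recognized as no collision
    return False

confirmed = confirm_collision([2.1, 2.2, 2.3], reference=2.0, preset_count=3)
dismissed = confirm_collision([2.1, 1.9, 2.5], reference=2.0, preset_count=3)
```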
Regarding claim 5, where all the limitations of claim 1 are discussed above, Ebrahimi Afrouzi further teaches:
5. The method of claim 1, further comprising:
in the monitoring of the movement state, receiving sensing data related to obstacle recognition from the serving robot and monitoring whether an obstacle exists in a movement direction of the serving robot; (Paragraph 0414, "In some embodiments, the processor avoids collisions between the robot and objects (including dynamic objects such as humans and pets) using sensors and a perceived path of the robot. In some embodiments, the [robot] executes the path using GPS, previous mappings, or by following along rails. In embodiments wherein the robot follows along rails the processor is not required to make any path planning decisions. The robot follows along the rails and the processor uses SLAM methods to avoid objects, such as humans. In some embodiments, the robot executes the path using markings on the floor that the processor of the robot detects based on sensor data collected by sensors of the robot. The processor uses sensor data to continuously detect and follow markings. In some embodiments, the robot executes the path using digital landmarks positioned along the path. The processor of the robot detects the digital landmarks based on sensor data collected by sensors of the robot. In some embodiments, the robot executes the path by following another robot or vehicle driven by a human. In these various embodiments, the processor may use various techniques to avoid objects. In some embodiments, the processor of the robot may not use the full SLAM solution but may use sensors and perceived information to safely operate. For example, a robot transporting passengers may execute a predetermined path by following observed marking on the road or by driving on a rail and may use sensor data and perceived information during operation to avoid collisions with objects.")
when it is recognized that there is an obstacle in a movement direction of the serving robot, recognizing a type of the obstacle based on the sensing data; and
… based on the type of the obstacle. (Paragraph 0392, " In embodiments, the objects may be classified or unclassified and may be identified or unidentified. In some embodiments, an object is identified when the processor identifies the object in an image of a stream of images (or video) captured by an image sensor of the robot. In some embodiments, upon identifying the object the processor has not yet determined a distance of the object, a classification of the object, or distinguished the object in any way. The processor has simply identified the existence of something in the image worth examining. In some embodiments, the processor may mark a region of the image in which the identified object is positioned with, for example, a question mark within a circle. In embodiments, an object may be any object that is not a part of the room, wherein the room may include at least one of the floor, the walls, the furniture, and the appliances. In some embodiments, an object is detected when the processor detects an object of certain shape, size, and/or distance. This provides an additional layer of detail over identifying the object as some vague characteristics of the object are determined. In some embodiments, an object is classified when the actual object type is determined (e.g., bike, toy car, remote control, keys, etc.). In some embodiments, an object is labelled when the processor classifies the object. However, in some cases, a labelled object may not be successfully classified and the object may be labelled as, for example, “other”. In some embodiments, an object may be labelled automatically by the processor using a classification algorithm or by a user using an application of a communication device (e.g., by choosing from a list of possible labels or creating new labels such as sock, fridge, table, other, etc.). In some embodiments, the user may customize labels by creating a particular label for an object. 
For example, a user may label a person named Sam by their actual name such that the classification algorithm may classify the person in a class named Sam upon recognizing them in the environment. In such cases, the classification may classify persons by their actual name without the user manually labelling the persons. In some instance, the processor may successfully determine that several faces observed are alike and belong to one person, however may not know which person. Or the processor may recognize a dog but may not know the name of the dog. In some embodiments, the user may label the faces or the dog with the name of the actual person or dog such that the classification algorithm may classify them by name in the future.")
Ebrahimi Afrouzi does not specifically teach a dynamic sensitivity for collision detection. However, Watanabe, in the same field of endeavor of robotics, teaches:
… determining whether to adjust the sensitivity (Paragraph 0011, "A region of the operation region of the robot arm, which is closer to the work region of the operator, is set as the low-speed operation region, and the collision detection sensitivity in the low-speed operation region is set to be as high as possible. In this setting, even in a case where the operator contacts the robot arm, the robot arm collides against the operator at a low speed, and can be stopped with a high sensitivity. In contrast, the robot arm can be operated at a speed that is as high as possible in the high-speed operation region. As a result, compared to a case where the collision detection sensitivity is not changed between the high-speed operation region and the low-speed operation region, safety for the operator can be improved, and the work efficiency can be increased." as well as Paragraph 0019, "The collision stop section may be configured to change the collision detection sensitivity so that the collision detection sensitivity in the high-speed operation region, the collision detection sensitivity in the at least one intermediate-speed operation region, and the collision detection sensitivity in the low-speed operation region are increased in this order, in the high-speed operation region, the at least one intermediate-speed operation region, and the low-speed operation region." and Paragraph 0070, "Although in the present embodiment, the collision detection sensitivity (the first reference value, the second reference value) at the detection of the collision is manually set, and is changed (switched) between the high-speed operation region 20H and the low-speed operation region 20L, the collision detection sensitivity may be automatically set. 
Specifically, the controller 3 causes the robot 2 to perform particular work (operation), the robot 2 to learn maximum torque generated in each of the axes 21 to 26 in every operation of the robot 2, and sets the first reference value and the second reference value based on the learned maximum torque. This makes it possible to set the collision detection sensitivity to an optimal value corresponding to an operation environment.") …
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and control methods as taught by Ebrahimi Afrouzi with the ability to use a dynamic sensitivity for detecting collisions as taught by Watanabe. This would allow the system to operate at a high level of efficiency while maintaining a safe environment.
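Examiner Note: the region-dependent sensitivity of the quoted Paragraphs 0011 and 0019, and the automatic setting from learned maximum torque of Paragraph 0070, can be sketched as follows (illustrative Python; the region names, reference values, and margin factor are hypothetical). A lower reference value corresponds to a higher collision detection sensitivity.

```python
# Hypothetical region-to-reference mapping (a lower reference value means
# a higher collision detection sensitivity), ordered per Paras. 0011/0019.
REFERENCE_BY_REGION = {
    "high_speed": 0.6,          # farthest from the operator, least sensitive
    "intermediate_speed": 0.4,
    "low_speed": 0.2,           # nearest the operator, most sensitive
}

def auto_reference(learned_max_torques, margin=1.2):
    """Per Para. 0070: set per-axis reference values automatically from
    the maximum torque learned during the robot's particular work."""
    return [torque * margin for torque in learned_max_torques]
```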
Regarding claim 6, where all the limitations of claim 1 are discussed above, Ebrahimi Afrouzi further teaches:
6. The method of claim 1, further comprising:
prior to the monitoring of the movement state, recognizing a plurality of areas included in a space in which the serving robot performs serving; (Paragraph 0238, "The processor may, for example, receive and process data from internal or external sensors, execute commands based on data received, control motors such as wheel motors, map the environment, localize the robot, determine division of the environment into zones, and determine movement paths. In some cases, the robot may include a microcontroller on which computer code required for executing the methods and techniques described herein may be stored.")
…
in the monitoring of the movement state, monitoring whether a current position of the serving robot corresponds to the variable sensitivity area; (Paragraph 0421, "In some embodiments, the processor of the robot may track the position of the robot as the robot moves from a known state to a next discrete state. The next discrete state may be a state within one or more layers of superimposed Cartesian (or other type) coordinate system, wherein some ordered pairs may be marked as possible obstacles. In some embodiments, the processor may use an inverse measurement model when filling obstacle data into the coordinate system to indicate obstacle occupancy, free space, or probability of obstacle occupancy. In some embodiments, the processor of the robot may determine an uncertainty of the pose of the robot and the state space surrounding the robot. In some embodiments, the processor of the robot may use a Markov assumption, wherein each state is a complete summary of the past and used to determine the next state of the robot. In some embodiments, the processor may use a probability distribution to estimate a state of the robot since state transitions occur by actuations that are subject to uncertainties, such as slippage (e.g., slippage while driving on carpet, low-traction flooring, slopes, and over obstacles such as cords and cables). In some embodiments, the probability distribution may be determined based on readings collected by sensors of the robot. In some embodiments, the processor may use an Extended Kalman Filter for non-linear problems. In some embodiments, the processor of the robot may use an ensemble consisting of a large number of virtual copies of the robot, each virtual copy representing a possible state that the real robot is in. In embodiments, the processor may maintain, increase, or decrease the size of the ensemble as needed. In embodiments, the processor may renew, weaken, or strengthen the virtual copy members of the ensemble. 
In some embodiments, the processor may identify a most feasible member and one or more feasible successors of the most feasible member. In some embodiments, the processor may use maximum likelihood methods to determine the most likely member to correspond with the real robot at each point in time.")
… when the current position of the serving robot corresponds to …
restoring the adjusted sensitivity when the current position of the serving robot deviates from (Paragraph 0238, "The processor may, for example, receive and process data from internal or external sensors, execute commands based on data received, control motors such as wheel motors, map the environment, localize the robot, determine division of the environment into zones, and determine movement paths. In some cases, the robot may include a microcontroller on which computer code required for executing the methods and techniques described herein may be stored.") …
Ebrahimi Afrouzi does not specifically teach a dynamic sensitivity for collision detection. However, Watanabe, in the same field of endeavor of robotics, teaches:
… determining a variable sensitivity area among the plurality of areas; … adjusting the sensitivity based on a type of the variable sensitivity area … the variable sensitivity area; and
after adjusting the sensitivity, … the variable sensitivity area. (Paragraph 0011, "A region of the operation region of the robot arm, which is closer to the work region of the operator, is set as the low-speed operation region, and the collision detection sensitivity in the low-speed operation region is set to be as high as possible. In this setting, even in a case where the operator contacts the robot arm, the robot arm collides against the operator at a low speed, and can be stopped with a high sensitivity. In contrast, the robot arm can be operated at a speed that is as high as possible in the high-speed operation region. As a result, compared to a case where the collision detection sensitivity is not changed between the high-speed operation region and the low-speed operation region, safety for the operator can be improved, and the work efficiency can be increased." as well as Paragraph 0019, "The collision stop section may be configured to change the collision detection sensitivity so that the collision detection sensitivity in the high-speed operation region, the collision detection sensitivity in the at least one intermediate-speed operation region, and the collision detection sensitivity in the low-speed operation region are increased in this order, in the high-speed operation region, the at least one intermediate-speed operation region, and the low-speed operation region." and Paragraph 0070, "Although in the present embodiment, the collision detection sensitivity (the first reference value, the second reference value) at the detection of the collision is manually set, and is changed (switched) between the high-speed operation region 20H and the low-speed operation region 20L, the collision detection sensitivity may be automatically set. 
Specifically, the controller 3 causes the robot 2 to perform particular work (operation), the robot 2 to learn maximum torque generated in each of the axes 21 to 26 in every operation of the robot 2, and sets the first reference value and the second reference value based on the learned maximum torque. This makes it possible to set the collision detection sensitivity to an optimal value corresponding to an operation environment.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and operating methods as taught by Ebrahimi Afrouzi with the ability to assign collision detection sensitivities to different regions determined by the system as taught by Watanabe. This would allow the system to operate at a high level of efficiency while maintaining a safe environment.
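Examiner Note: the claim 6 concept of adjusting the sensitivity while the serving robot's current position corresponds to a variable sensitivity area, and restoring the adjusted sensitivity when the position deviates from that area, can be sketched as follows (illustrative Python; the area representation and all values are hypothetical):

```python
class SensitivityController:
    """Hypothetical sketch: adjust the collision detection sensitivity
    while the robot is inside a variable sensitivity area and restore
    the default when the position deviates from every such area."""

    def __init__(self, default_sensitivity=1.0):
        self.default = default_sensitivity
        self.current = default_sensitivity

    def update(self, position, areas):
        """position: (x, y); areas: iterable of axis-aligned rectangles
        given as (x_min, y_min, x_max, y_max, sensitivity)."""
        x, y = position
        for x0, y0, x1, y1, sensitivity in areas:
            if x0 <= x <= x1 and y0 <= y <= y1:
                self.current = sensitivity      # inside: adjust
                return self.current
        self.current = self.default             # outside: restore
        return self.current

areas = [(0, 0, 5, 5, 0.5)]                     # one variable sensitivity area
ctrl = SensitivityController()
inside = ctrl.update((2, 2), areas)             # adjusted while inside
outside = ctrl.update((8, 8), areas)            # restored after leaving
```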
Regarding claim 7, where all the limitations of claim 6 are discussed above, Ebrahimi Afrouzi further teaches:
7. The method of claim 6, wherein the determining of the variable sensitivity area among the plurality of areas includes:
recognizing a current congestion level based on at least one of a current time, the number of orders placed in the space, and the number of people who have entered the space; (Paragraph 1226, "In some embodiments, the map of the environment is a grid map wherein the map is divided into cells (e.g., unit tiles in a regular or irregular tiling), each cell representing a different location within the environment. In some embodiments, the processor divides the map to form a grid map. In some embodiments, the map is a Cartesian coordinate map while in other embodiments the map is of another type, such as a polar, homogenous, or spherical coordinate map. In some embodiments, the environmental sensor collects data as the robot navigates throughout the environment or operates within the environment as the processor maps the environment. In some embodiments, the processor associates each or a portion of the environmental sensor readings with the particular cell of the grid map within which the robot was located when the particular sensor readings were taken. In some embodiments, the processor associates environmental characteristics directly measured or inferred from sensor readings with the particular cell within which the robot was located when the particular sensor readings were taken. In some embodiments, the processor associates environmental sensor data obtained from a fixed sensing device and/or another robot with cells of the grid map. In some embodiments, the robot continues to operate within the environment until data from the environmental sensor is collected for each or a select number of cells of the grid map. 
In some embodiments, the environmental characteristics (predicted or measured or inferred) associated with cells of the grid map include, but are not limited to (which is not to suggest that any other described characteristic is required in all embodiments), a driving surface type, a room or area type, a type of driving surface transition, a level of debris accumulation, a type of debris, a size of debris, a frequency of encountering debris accumulation, day and time of encountering debris accumulation, a level of user activity, a time of user activity, an obstacle density, an obstacle type, an obstacle size, a frequency of encountering a particular obstacle, a day and time of encountering a particular obstacle, a level of traffic, a driving surface quality, a hazard, etc. In some embodiments, the environmental characteristics associated with cells of the grid map are based on sensor data collected during multiple working sessions wherein characteristics are assigned a probability of being true based on observations of the environment over time.")
recognizing at least one area corresponding to the current congestion level among the plurality of areas; (Paragraphs 1229-1230, "In some embodiments, the processor (e.g., of a robot or a remote server system, either one of which (or a combination of which) may implement the various logical operations described herein) determines probabilities of environmental characteristics (e.g., an obstacle, a driving surface type, a type of driving surface transition, a room or area type, a level of debris accumulation, a type or size of debris, obstacle density, level of traffic, driving surface quality, etc.) existing in a particular location of the environment based on current sensor data and sensor data collected during prior work sessions. For example, in some embodiments, the processor updates probabilities of different driving surface types existing in a particular location of the environment based on the currently inferred driving surface type of the particular location and the previously inferred driving surface types of the particular location during prior working sessions of the robot and/or of other robots or fixed sensing devices monitoring the environment. In some embodiments, the processor updates the aggregate map after each work session. In some embodiments, the processor adjusts speed of components and/or activates/deactivates functions based on environmental characteristics with highest probability of existing in the particular location of the robot such that they are ideal for the environmental characteristics predicted. For example, based on aggregate sensory data there is an 85% probability that the type of driving surface in a particular location is hardwood, a 5% probability it is carpet, and a 10% probability it is tile. The processor adjusts the speed of components to ideal speed for hardwood flooring given the high probability of the location having hardwood flooring. 
Some embodiments may classify unit tiles into a flooring ontology, and entries in that ontology may be mapped in memory to various operational characteristics of actuators of the robot that are to be applied.
In some embodiments, the processor uses the aggregate map to predict areas with high risk of stalling, colliding with obstacles and/or becoming entangled with an obstruction. In some embodiments, the processor records the location of each such occurrence and marks the corresponding grid cell(s) in which the occurrence took place. For example, the processor uses aggregated obstacle sensor data collected over multiple work sessions to determine areas with high probability of collisions or aggregated electrical current sensor of a peripheral brush motor or motor of another device to determine areas with high probability of increased electrical current due to entanglement with an obstruction. In some embodiments, the processor causes the robot to avoid or reduce visitation to such areas.") and
adjusting a size of the at least one area based on the current congestion level to acquire the variable sensitivity area, (Paragraph 1228, "In some embodiments, the processor generates a new grid map with new characteristics associated with each or a portion of the cells of the grid map at each work session. For instance, each unit tile may have associated therewith a plurality of environmental characteristics, like classifications in an ontology or scores in various dimensions like those discussed above. In some embodiments, the processor compiles the map generated at the end of a work session with an aggregate map based on a combination of maps generated during each or a portion of prior work sessions. In some embodiments, the processor directly integrates data collected during a work session into the aggregate map either after the work session or in real-time as data is collected. In some embodiments, the processor aggregates (e.g., consolidates a plurality of values into a single value based on the plurality of values) current sensor data collected with all or a portion of sensor data previously collected during prior working sessions of the robot. In some embodiments, the processor also aggregates all or a portion of sensor data collected by sensors of other robots or fixed sensing devices monitoring the environment." as well as Paragraph 1164, "In some embodiments, the processor may determine the best division of an environment by minimizing a cost function defined as the difference between theoretical (e.g., modeled with uncertainty) area of the environment and the actual area covered. The theoretical area of the environment may be determined by the processor using a map of the environment. The actual area covered may be determined by the processor by recorded movement of the robot using, for example, an odometer or gyroscope. 
In some embodiments, the processor may determine the best division of the environment by minimizing a cost function dependent on a path taken by the robot comprising the paths taken within each zone and in between zones. The processor may restrict zones to being rectangular (or having some other defined number of vertices or sides) and may restrict the robot to entering a zone at a corner and to driving a serpentine routine (or other driving routine) in either x- or y-direction such that the trajectory ends at another corner of the zone. The cost associated with a particular division of an environment and order of zone coverage may be computed as the sum of the distances of the serpentine path travelled for coverage within each zone and the sum of the distances travelled in between zones (corner to corner). To minimize cost function and improve coverage efficiency zones may be further divided, merged, reordered for coverage and entry/exit points of zones may be adjusted. In some embodiments, the processor of the robot may initiate these actions at random or may target them. In some embodiments, wherein actions are initiated at random (e.g., based on a pseudorandom value) by the processor, the processor may choose a random action such as, dividing, merging or reordering zones, and perform the action. The processor may then optimize entry/exit points for the chosen zones and order of zones. A difference between the new cost and old cost may be computed as Δ=new cost−old cost by the processor wherein an action resulting in a difference <0 is accepted while a difference >0 is accepted with probability exp(−Δ/T) wherein T is a scaling constant. Since cost, in some embodiments, strongly depends on randomly determined actions the processor of the robot, embodiments may evolve ten different instances and after a specified number of iterations may discard a percentage of the worst instances.") and
…
Ebrahimi Afrouzi does not specifically teach a dynamic sensitivity for collision detection. However, Watanabe, in the same field of endeavor of robotics, teaches:
… the variable sensitivity area is an area in which the sensitivity related to the collision detection of the serving robot increases. (Paragraph 0011, "A region of the operation region of the robot arm, which is closer to the work region of the operator, is set as the low-speed operation region, and the collision detection sensitivity in the low-speed operation region is set to be as high as possible. In this setting, even in a case where the operator contacts the robot arm, the robot arm collides against the operator at a low speed, and can be stopped with a high sensitivity. In contrast, the robot arm can be operated at a speed that is as high as possible in the high-speed operation region. As a result, compared to a case where the collision detection sensitivity is not changed between the high-speed operation region and the low-speed operation region, safety for the operator can be improved, and the work efficiency can be increased." as well as Paragraph 0019, "The collision stop section may be configured to change the collision detection sensitivity so that the collision detection sensitivity in the high-speed operation region, the collision detection sensitivity in the at least one intermediate-speed operation region, and the collision detection sensitivity in the low-speed operation region are increased in this order, in the high-speed operation region, the at least one intermediate-speed operation region, and the low-speed operation region." and Paragraph 0070, "Although in the present embodiment, the collision detection sensitivity (the first reference value, the second reference value) at the detection of the collision is manually set, and is changed (switched) between the high-speed operation region 20H and the low-speed operation region 20L, the collision detection sensitivity may be automatically set. 
Specifically, the controller 3 causes the robot 2 to perform particular work (operation), the robot 2 to learn maximum torque generated in each of the axes 21 to 26 in every operation of the robot 2, and sets the first reference value and the second reference value based on the learned maximum torque. This makes it possible to set the collision detection sensitivity to an optimal value corresponding to an operation environment.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and its method of operation as taught by Ebrahimi Afrouzi with the ability to adjust the sensitivity for collision detection as taught by Watanabe. This would allow the system to operate at a high level of efficiency while maintaining a safe environment.
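For context only, the zone-modification acceptance rule quoted above from Ebrahimi Afrouzi (Paragraph 1164) — accept a cost-reducing action outright, and accept a cost-increasing action with probability exp(−Δ/T) — follows the familiar simulated-annealing form. The following is a minimal illustrative sketch, not the reference's implementation; the function and parameter names are hypothetical:

```python
import math
import random

def accept_action(new_cost: float, old_cost: float, T: float = 1.0) -> bool:
    """Acceptance rule as quoted from the reference: an action whose cost
    difference is < 0 is accepted, while a difference > 0 is accepted with
    probability exp(-delta / T), where T is a scaling constant."""
    delta = new_cost - old_cost
    if delta < 0:
        return True
    return random.random() < math.exp(-delta / T)
```

Under this rule a cost-reducing division, merge, or reorder of zones is always kept, while occasional cost-increasing actions are kept to avoid local minima, consistent with the quoted evolution of multiple instances.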
Regarding claim 8, where all the limitations of claim 6 are discussed above, Ebrahimi Afrouzi further teaches:
8. The method of claim 6, wherein the determining of the variable sensitivity area among the plurality of areas includes:
acquiring map information on the space in which the serving robot performs the serving; (Paragraph 0238, "The processor may, for example, receive and process data from internal or external sensors, execute commands based on data received, control motors such as wheel motors, map the environment, localize the robot, determine division of the environment into zones, and determine movement paths. In some cases, the robot may include a microcontroller on which computer code required for executing the methods and techniques described herein may be stored.")
recognizing at least one area in which no people move among the plurality of areas based on the map information; (Paragraphs 1160-1162, "In some embodiments, the processor of the robot may determine optimal (e.g., locally or globally) division and coverage of the environment by minimizing a cost function or by maximizing a reward function. In some embodiments, the overall cost function C of a zone or an environment may be calculated by the processor of the robot based on a travel and cleaning cost K and coverage L. In some embodiments, other factors may be inputs to the cost function. The processor may attempt to minimize the travel and cleaning cost K and maximize coverage L. In some embodiments, the processor may determine the travel and cleaning cost K by computing individual cost for each zone and adding the required driving cost between zones. The driving cost between zones may depend on where the robot ended coverage in one zone, and where it begins coverage in a following zone. The cleaning cost may be dependent on factors such as the path of the robot, coverage time, etc. In some embodiments, the processor may determine the coverage based on the square meters of area covered (or otherwise area operated on) by the robot. In some embodiments, the processor of the robot may minimize the total cost function by modifying zones of the environment by, for example, removing, adding, shrinking, expanding, moving and switching the order of coverage of zones. For example, in some embodiments the processor may restrict zones to having rectangular shape, allow the robot to enter or leave a zone at any surface point and permit overlap between rectangular zones to determine optimal zones of an environment. In some embodiments, the processor may include or exclude additional conditions. In some embodiments, the cost accounts for additional features other than or in addition to travel and operating cost and coverage. 
Examples of features that may be inputs to the cost function may include, coverage, size, and area of the zone, zone overlap with perimeters (e.g., walls, buildings, or other areas the robot cannot travel), location of zones, overlap between zones, location of zones, and shared boundaries between zones. In some embodiments, a hierarchy may be used by the processor to prioritize importance of features (e.g., different weights may be mapped to such features in a differentiable weighted, normalized sum). For example, tier one of a hierarchy may be location of the zones such that traveling distance between sequential zones is minimized and boundaries of sequential zones are shared, tier two may be to avoid perimeters, tier three may be to avoid overlap with other zones and tier four may be to increase coverage.
In some embodiments, the processor may use various functions to further improve optimization of coverage of the environment. These functions may include, a discover function wherein a new small zone may be added to large and uncovered areas, a delete function wherein any zone with size below a certain threshold may be deleted, a step size control function wherein decay of step size in gradient descent may be controlled, a pessimism function wherein any zone with individual operating cost below a certain threshold may be deleted, and a fast grow function wherein any space adjacent to a zone that is predominantly unclaimed by any other zone may be quickly incorporated into the zone.
In some embodiments, to optimize division of zones of an environment, the processor may proceed through the following iteration for each zone of a sequence of zones, beginning with the first zone: expansion of the zone if neighbor cells are empty, movement of the robot to a point in the zone closest to the current position of the robot, addition of a new zone coinciding with the travel path of the robot from its current position to a point in the zone closest to the robot if the length of travel from its current position is significant, execution of a coverage pattern (e.g. boustrophedon) within the zone, and removal of any uncovered cells from the zone.") and
…
Ebrahimi Afrouzi does not specifically teach a dynamic sensitivity for collision detection. However, Watanabe, in the same field of endeavor of robotics, teaches:
… acquiring the at least one area as the variable sensitivity area, and
the variable sensitivity area is an area in which the sensitivity related to the collision detection of the serving robot decreases. (Paragraph 0011, "A region of the operation region of the robot arm, which is closer to the work region of the operator, is set as the low-speed operation region, and the collision detection sensitivity in the low-speed operation region is set to be as high as possible. In this setting, even in a case where the operator contacts the robot arm, the robot arm collides against the operator at a low speed, and can be stopped with a high sensitivity. In contrast, the robot arm can be operated at a speed that is as high as possible in the high-speed operation region. As a result, compared to a case where the collision detection sensitivity is not changed between the high-speed operation region and the low-speed operation region, safety for the operator can be improved, and the work efficiency can be increased." as well as Paragraph 0019, "The collision stop section may be configured to change the collision detection sensitivity so that the collision detection sensitivity in the high-speed operation region, the collision detection sensitivity in the at least one intermediate-speed operation region, and the collision detection sensitivity in the low-speed operation region are increased in this order, in the high-speed operation region, the at least one intermediate-speed operation region, and the low-speed operation region." and Paragraph 0070, "Although in the present embodiment, the collision detection sensitivity (the first reference value, the second reference value) at the detection of the collision is manually set, and is changed (switched) between the high-speed operation region 20H and the low-speed operation region 20L, the collision detection sensitivity may be automatically set. 
Specifically, the controller 3 causes the robot 2 to perform particular work (operation), the robot 2 to learn maximum torque generated in each of the axes 21 to 26 in every operation of the robot 2, and sets the first reference value and the second reference value based on the learned maximum torque. This makes it possible to set the collision detection sensitivity to an optimal value corresponding to an operation environment.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and its method of operation as taught by Ebrahimi Afrouzi with the ability to adjust the sensitivity for collision detection as taught by Watanabe. This would allow the system to operate at a high level of efficiency while maintaining a safe environment.
Regarding claim 9, where all the limitations of claim 1 are discussed above, Ebrahimi Afrouzi further teaches:
9. An apparatus comprising:
a memory configured to store one or more instructions; and
a processor configured to execute the one or more instructions stored in the memory,
wherein the processor performs the method of claim 1 by executing the one or more instructions. (Paragraph 1508, "In block diagrams provided herein, illustrated components are depicted as discrete functional blocks, but embodiments are not limited to systems in which the functionality described herein is organized as illustrated. The functionality provided by each of the components may be provided by software or hardware modules that are differently organized than is presently depicted. For example, such software or hardware may be intermingled, conjoined, replicated, broken up, distributed (e.g. within a data center or geographically), or otherwise differently organized. The functionality described herein may be provided by one or more processors of one or more computers executing code stored on a tangible, non-transitory, machine readable medium. In some cases, notwithstanding use of the singular term “medium,” the instructions may be distributed on different storage devices associated with different computing devices, for instance, with each computing device having a different subset of the instructions, an implementation consistent with usage of the singular term “medium” herein. In some cases, third party content delivery networks may host some or all of the information conveyed over networks, in which case, to the extent information (e.g., content) is said to be supplied or otherwise provided, the information may be provided by sending instructions to retrieve that information from a content delivery network.")
Regarding claim 10, Ebrahimi Afrouzi further teaches:
10. A computer program stored in a computer-readable recording medium on which a program for executing a method of collision detection braking control in a serving robot with a computing device is recorded, wherein the method comprises:
… related to collision detection of a serving robot; (Paragraph 0307, "In embodiments, deep learning may be used to improve perception, improve trajectory such that it follows the planned path more accurately, improve coverage, improve obstacle detection and collision prevention, improve decision making such that it is more human-like, improve decision making in situation wherein some data is missing, etc.")
…
monitoring the movement state of the serving robot (Paragraph 0421, "In some embodiments, the processor of the robot may track the position of the robot as the robot moves from a known state to a next discrete state. The next discrete state may be a state within one or more layers of superimposed Cartesian (or other type) coordinate system, wherein some ordered pairs may be marked as possible obstacles. In some embodiments, the processor may use an inverse measurement model when filling obstacle data into the coordinate system to indicate obstacle occupancy, free space, or probability of obstacle occupancy. In some embodiments, the processor of the robot may determine an uncertainty of the pose of the robot and the state space surrounding the robot. In some embodiments, the processor of the robot may use a Markov assumption, wherein each state is a complete summary of the past and used to determine the next state of the robot. In some embodiments, the processor may use a probability distribution to estimate a state of the robot since state transitions occur by actuations that are subject to uncertainties, such as slippage (e.g., slippage while driving on carpet, low-traction flooring, slopes, and over obstacles such as cords and cables). In some embodiments, the probability distribution may be determined based on readings collected by sensors of the robot. In some embodiments, the processor may use an Extended Kalman Filter for non-linear problems. In some embodiments, the processor of the robot may use an ensemble consisting of a large number of virtual copies of the robot, each virtual copy representing a possible state that the real robot is in. In embodiments, the processor may maintain, increase, or decrease the size of the ensemble as needed. In embodiments, the processor may renew, weaken, or strengthen the virtual copy members of the ensemble. 
In some embodiments, the processor may identify a most feasible member and one or more feasible successors of the most feasible member. In some embodiments, the processor may use maximum likelihood methods to determine the most likely member to correspond with the real robot at each point in time.") and a motor current value of the serving robot; (Paragraph 0406, "In some embodiments, a map of the environment is separately built from the obstacle map. In some embodiments, an obstacle map is divided into two categories, moving and stationary obstacle maps. In some embodiments, the processor separately builds and maintains each type of obstacle map. In some embodiments, the processor of the robot may detect an obstacle based on an increase in electrical current drawn by a wheel or brush or other component motor. For example, when stuck on an object, the brush motor may draw more current as it experiences resistance cause by impact against the object. In some embodiments, the processor superimposes the obstacle maps with moving and stationary obstacles to form a complete perception of the environment." as well as Paragraph 1232, "In some embodiments, the processor of the robot uses real-time environmental sensor data (or environmental characteristics inferred therefrom) or environmental sensor data aggregated from different working sessions or information from the aggregate map of the environment to dynamically adjust the speed of components and/or activate/deactivate functions of the robot during operation in an environment. For example, an electrical current sensor may be used to measure the amount of current drawn by a motor of a main brush in real-time. The processor may infer the type of driving surface based on the amount current drawn and in response adjusts the speed of components such that they are ideal for the particular driving surface type. 
For instance, if the current drawn by the motor of the main brush is high, the processor may infer that a robotic vacuum is on carpet, as more power is required to rotate the main brush at a particular speed on carpet as compared to hard flooring (e.g., wood or tile). In response to inferring carpet, the processor may increase the speed of the main brush and impeller (or increase applied torque without changing speed, or increase speed and torque) and reduce the speed of the wheels for a deeper cleaning. Some embodiments may raise or lower a brush in response to a similar inference, e.g., lowering a brush to achieve a deeper clean. In a similar manner, an electrical current sensor that measures the current drawn by a motor of a wheel may be used to predict the type of driving surface, as carpet or grass, for example, requires more current to be drawn by the motor to maintain a particular speed as compared to hard driving surface. In some embodiments, the processor aggregates motor current measured during different working sessions and determines adjustments to speed of components using the aggregated data. In another example, a distance sensor takes distance measurements and the processor infers the type of driving surface using the distance measurements. For instance, the processor infers the type of driving surface from distance measurements of a time-of-flight (“TOF”) sensor positioned on, for example, the bottom surface of the robot as a hard driving surface when, for example, when consistent distance measurements are observed over time (to within a threshold) and soft driving surface when irregularity in readings are observed due to the texture of for example, carpet or grass. In a further example, the processor uses sensor readings of an image sensor with at least one IR illuminator or any other structured light positioned on the bottom side of the robot to infer type of driving surface. The processor observes the signals to infer type of driving surface. 
For example, driving surfaces such as carpet or grass produce more distorted and scattered signals as compared with hard driving surfaces due to their texture. The processor may use this information to infer the type of driving surface.") and
controlling the serving robot to brake for the braking time (Paragraph 1237, "In some embodiments, the processor may use machine learning techniques to predict environmental characteristics using sensor data such that adjustments to speed of components of the robot may be made autonomously and in real-time to accommodate the current environment. In some embodiments, Bayesian methods may be used in predicting environmental characteristics. For example, to increase confidence in predictions (or measurements or inferences) of environmental characteristics in different locations of the environment, the processor may use a first set of sensor data collected by a first sensor to predict (or measure or infer) an environmental characteristic of a particular location a priori to using a second set of sensor data collected by a second sensor to predict an environmental characteristic of the particular location. Examples of adjustments may include, but are not limited to, adjustments to the speed of components (e.g., a cleaning tool such a main brush or side brush, wheels, impeller, cutting blade, digger, salt or fertilizer distributor, or other component depending on the type of robot), activating/deactivating functions (e.g., UV treatment, sweeping, steam or liquid mopping, vacuuming, mowing, ploughing, salt distribution, fertilizer distribution, digging, and other functions depending on the type of robot), adjustments to movement path, adjustments to the division of the environment into subareas, and operation schedule, etc. In some embodiments, the processor may use a classifier such as a convolutional neural network to classify real-time sensor data of a location within the environment into different environmental characteristic classes such as driving surface types, room or area types, levels of debris accumulation, debris types, debris sizes, traffic level, obstacle density, human activity level, driving surface quality, and the like. 
In some embodiments, the processor may dynamically and in real-time adjust the speed of components of the robot based on the current environmental characteristics. Initially, the classifier may be trained such that it may properly classify sensor data to different environmental characteristic classes. In some embodiments, training may be executed remotely and trained model parameters may be downloaded to the robot, which is not to suggest that any other operation herein must be performed on the robot. The classifier may be trained by, for example, providing the classifier with training and target data that contains the correct environmental characteristic classifications of the sensor readings within the training data. For example, the classifier may be trained to classify electric current sensor data of a wheel motor into different driving surface types. For instance, if the magnitude of the current drawn by the wheel motor is greater than a particular threshold for a predetermined amount of time, the classifier may classify the current sensor data to a carpet driving surface type class (or other soft driving surface depending on the environment of the robot) with some certainty. In other embodiments, the processor may classify sensor data based on the change in value of the sensor data over a predetermined amount of time or using entropy. For example, the processor may classify current sensor data of a wheel motor into a driving surface type class based on the change in electrical current over a predetermined amount of time or entropy value. In response to predicting an environmental characteristic, such as a driving type, the processor may adjust the speed of components such that they are optimal for operating in an environment with the particular characteristics predicted, such as a predicted driving surface type. In some embodiments, adjusting the speed of components may include adjusting the speed of the motors driving the components. 
In some embodiments, the processor may also choose actions and/or settings of the robot in response to predicted (or measured or inferred) environmental characteristics of a location. In other examples, the classifier may classify distance sensor data, audio sensor data, or optical sensor data into different environmental characteristic classes (e.g., different driving surface types, room or area types, levels of debris accumulation, debris types, debris sizes, traffic level, obstacle density, human activity level, driving surface quality, etc.).") …
Ebrahimi Afrouzi does not specifically teach a sensitivity for collision detection, a braking time, or collision reference values based on the sensitivity. However, Watanabe, in the same field of endeavor of robotics, teaches:
… acquiring sensitivity (Paragraph 0011, "A region of the operation region of the robot arm, which is closer to the work region of the operator, is set as the low-speed operation region, and the collision detection sensitivity in the low-speed operation region is set to be as high as possible. In this setting, even in a case where the operator contacts the robot arm, the robot arm collides against the operator at a low speed, and can be stopped with a high sensitivity. In contrast, the robot arm can be operated at a speed that is as high as possible in the high-speed operation region. As a result, compared to a case where the collision detection sensitivity is not changed between the high-speed operation region and the low-speed operation region, safety for the operator can be improved, and the work efficiency can be increased." as well as Paragraph 0019, "The collision stop section may be configured to change the collision detection sensitivity so that the collision detection sensitivity in the high-speed operation region, the collision detection sensitivity in the at least one intermediate-speed operation region, and the collision detection sensitivity in the low-speed operation region are increased in this order, in the high-speed operation region, the at least one intermediate-speed operation region, and the low-speed operation region." and Paragraph 0070, "Although in the present embodiment, the collision detection sensitivity (the first reference value, the second reference value) at the detection of the collision is manually set, and is changed (switched) between the high-speed operation region 20H and the low-speed operation region 20L, the collision detection sensitivity may be automatically set. 
Specifically, the controller 3 causes the robot 2 to perform particular work (operation), the robot 2 to learn maximum torque generated in each of the axes 21 to 26 in every operation of the robot 2, and sets the first reference value and the second reference value based on the learned maximum torque. This makes it possible to set the collision detection sensitivity to an optimal value corresponding to an operation environment.") … determining collision detection reference values for each movement state based on the sensitivity; … when the motor current value exceeds the collision detection reference value corresponding to the movement state. (Paragraph 0054, "Then, the collision detecting section 33 determines whether or not the difference current value is larger than a first reference value set for each of the axes (step S4). In a case where the collision detecting section 33 determines that the difference current value is larger than the first reference value, the collision detecting section 33 determines that the collision has occurred, and outputs the collision detection signal indicating that the collision has been detected to the current limiting section 34 (see FIG. 4) (step S5). Then, the current limiting section 34 provides the limited current command value to the current generating circuit 32, and stops (ceases) current supply to the servo motor 28 (step S6).")
Watanabe does not specifically teach a braking time. However, Perkins, in the same field of endeavor of robotics, teaches:
… and a braking time (Paragraph 0076, "In some embodiments, the one or more operating parameters comprise one or more operating speed limits, such as a travel speed limit of a mobile base of the robot and/or a speed limit of a relevant point in space (e.g., a point located on an exterior surface of a robotic arm, a robotic manipulator, a robotic joint of the robot, or an objected manipulated by the robot). In some embodiments, the one or more operating parameters comprise an operating velocity and/or an acceleration limit. In some embodiments, the computing device receives a velocity and/or an acceleration of the entity, and the one or more operating parameters are based on the velocity and/or the acceleration (e.g., a current velocity and/or acceleration, an immediately prior velocity and/or acceleration, or another suitable vector.). In some embodiments, the computing device determines an operating velocity and/or acceleration limit of the robot, and the operating velocity and/or acceleration limit are included in the set of operating parameters for the robot. In some embodiments, the set of operating parameters comprises one or more stopping time limits (e.g., such that the robot 504 comes to a stop within the stopping time limit and/or the onboard safety systems of the robot 504 would observe the configuration and/or velocities of the robot 504 to confirm it is operating within the stopping time limit).") …
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and operation methods as taught by Ebrahimi Afrouzi with the ability to dynamically adjust the sensitivity and collision reference values as taught by Watanabe, as well as with the ability to monitor and adjust operating parameters such as braking time as taught by Perkins. This would ensure efficient and safe operation of the robotic system during use in a variety of situations and environments.
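For illustration only (not part of the record or of any cited reference), the per-axis current-threshold collision check described in Watanabe's paragraph 0054 (steps S4-S6 of FIG. 4) could be sketched as follows; all function names and values are hypothetical:

```python
# Hypothetical sketch of Watanabe's per-axis collision check:
# a collision is detected when any axis's difference current
# value exceeds the first reference value set for that axis.

def detect_collision(difference_currents, reference_values):
    """Return True if any axis's difference current exceeds its
    first reference value (cf. step S4)."""
    return any(diff > ref
               for diff, ref in zip(difference_currents, reference_values))

def control_step(difference_currents, reference_values):
    # On detection, current supply to the servo motor is stopped
    # via the limited current command value (cf. steps S5-S6).
    if detect_collision(difference_currents, reference_values):
        return "STOP_CURRENT"
    return "CONTINUE"
```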
Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi Afrouzi in view of Watanabe and Perkins, and in further view of Meduna et al. (US 20230182300 A1), hereinafter Meduna.
Regarding claim 2, the limitations of claim 1 having been addressed above, Ebrahimi Afrouzi does not specifically teach determining collision detection reference values for the front, back, left, and right of the robot. However, Watanabe, in the same field of endeavor of robotics, teaches:
2. The method of claim 1, wherein the determining of the collision detection reference values for each movement state based on the sensitivity includes determining a plurality of collision detection reference values (Paragraph 0054, "Then, the collision detecting section 33 determines whether or not the difference current value is larger than a first reference value set for each of the axes (step S4). In a case where the collision detecting section 33 determines that the difference current value is larger than the first reference value, the collision detecting section 33 determines that the collision has occurred, and outputs the collision detection signal indicating that the collision has been detected to the current limiting section 34 (see FIG. 4) (step S5). Then, the current limiting section 34 provides the limited current command value to the current generating circuit 32, and stops (ceases) current supply to the servo motor 28 (step S6).") … state of the serving robot based on the sensitivity. (Paragraph 0011, "A region of the operation region of the robot arm, which is closer to the work region of the operator, is set as the low-speed operation region, and the collision detection sensitivity in the low-speed operation region is set to be as high as possible. In this setting, even in a case where the operator contacts the robot arm, the robot arm collides against the operator at a low speed, and can be stopped with a high sensitivity. In contrast, the robot arm can be operated at a speed that is as high as possible in the high-speed operation region. As a result, compared to a case where the collision detection sensitivity is not changed between the high-speed operation region and the low-speed operation region, safety for the operator can be improved, and the work efficiency can be increased." 
as well as Paragraph 0019, "The collision stop section may be configured to change the collision detection sensitivity so that the collision detection sensitivity in the high-speed operation region, the collision detection sensitivity in the at least one intermediate-speed operation region, and the collision detection sensitivity in the low-speed operation region are increased in this order, in the high-speed operation region, the at least one intermediate-speed operation region, and the low-speed operation region." and Paragraph 0070, "Although in the present embodiment, the collision detection sensitivity (the first reference value, the second reference value) at the detection of the collision is manually set, and is changed (switched) between the high-speed operation region 20H and the low-speed operation region 20L, the collision detection sensitivity may be automatically set. Specifically, the controller 3 causes the robot 2 to perform particular work (operation), the robot 2 to learn maximum torque generated in each of the axes 21 to 26 in every operation of the robot 2, and sets the first reference value and the second reference value based on the learned maximum torque. This makes it possible to set the collision detection sensitivity to an optimal value corresponding to an operation environment.")
However, Meduna, in the same field of endeavor of robotics, teaches:
… corresponding to each of a forward state, a rearward state, a left turn state, and a right turn (Paragraphs 0079-0080, " In some embodiments, the size of the virtual bumper surrounding the robotic component may be fixed (e.g., the size may not be changeable) such that each of the distance sensors is configured to detect objects within a fixed distance (e.g., 2 meters) from the robotic component. In other embodiments, at least some of the distance sensors may be configured to detect objects within a variable distance that can be set based on one or more factors or criteria. Enabling variable control of the size and/or shape of the virtual bumper adds flexibility to the design, such that the virtual bumper may be adapted to different robot operating environments in which having a smaller, larger or differently-shaped virtual bumper may be advantageous.
In some embodiments, the virtual bumper is uniform in that all of the distance sensors used to form the virtual bumper are configured to detect objects within a same distance. In some embodiments, at least some of the distance sensors used to form the virtual bumper are configured to detect objects at different distances to produce a non-uniform virtual bumper around the robotic component (e.g., the virtual bumper may be larger in some directions than other directions). It should be appreciated that “configuring a distance sensor” to detect objects at different distances may be implemented in hardware, software, or some combination of hardware and software. For instance, in some embodiments, the same hardware (e.g., TOF sensors) is used for all distance sensors incorporated into the robotic component, and the size and/or shape of the virtual bumper is changed by altering the way in which distance measurement signals sensed by the distance sensors are processed (e.g., by one or more computer processors, described in more detail below with regard to FIG. 4).") …
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system and operating methods as taught by Ebrahimi Afrouzi with the ability to use reference values and a dynamic sensitivity for collision detection as taught by Watanabe, as well as with the concept of a dynamic buffer zone in every direction as taught by Meduna. It would be obvious to apply the logic of a buffer zone around the robot in all directions to the method of detecting a collision using current, as suggested by the combination of Ebrahimi Afrouzi and Watanabe. This would allow for a variable buffer around the robot so that it may operate most efficiently while maintaining a high level of safety.
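For illustration only (not part of the record), the claim-2 combination of Watanabe's sensitivity-based reference values with Meduna's directional (non-uniform) buffer could be sketched as a per-movement-state lookup; all names and numeric values below are invented:

```python
# Hypothetical sketch: a separate collision detection reference
# value for each movement state (forward, rearward, left turn,
# right turn), against which the motor current value is compared.

REFERENCE_VALUES = {
    "forward": 3.0,     # invented example values; in practice these
    "rearward": 2.0,    # would be derived from the sensitivity setting
    "left_turn": 2.5,
    "right_turn": 2.5,
}

def collision_detected(movement_state, motor_current):
    # A collision is flagged when the motor current exceeds the
    # reference value corresponding to the current movement state.
    return motor_current > REFERENCE_VALUES[movement_state]
```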
Conclusion
The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the Applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP § 2123.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATHER KENIRY whose telephone number is (571)270-5468. The examiner can normally be reached M-F 7:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Mott, can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.J.K./Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657