DETAILED ACTION
This is a non-final Office Action on the merits in response to communications filed by Applicant on December 16, 2024. Claims 1-11 are currently pending and are examined below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. § 119(a)-(d). The certified copy has been filed in parent Application No. JP2023-216540, filed on 12/22/2023.
Information Disclosure Statement
The Information Disclosure Statement(s) filed on 12/16/2024 is/are being considered by the examiner.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-11 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
STEP 1: Do the claims fall within one of the statutory categories?
Yes, claim(s) 1, 10, and 11 are directed towards a device, a method, and a non-transitory computer-readable storage medium, respectively.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?
Yes, the claims are directed to an abstract idea.
Independent claims 1, 10, and 11 are each directed towards a mental process, and as such, are directed towards an abstract idea. Claim 1 recites the limitation “makes a prediction for an action that the robot receives from the user at a specific time point after the time points”; claims 10 and 11 recite the commensurate limitations “making a prediction for an action that the robot receives from the user at a specific time point after the time points” and “make a prediction for an action that the robot receives from the user at a specific time point after the time points”, respectively. MPEP § 2106.04(a)(2)(III) states that mental processes include observations, evaluations, and judgments. Each of these limitations, as claimed, is a process involving making a judgment (i.e., making a prediction for an action) based on an observation (i.e., the correspondence information). Making a prediction for an action based on a data set is a process that can be performed in the human mind or with the aid of pen and paper. Therefore, claims 1, 10, and 11 are each directed towards a mental process, and as such, are directed towards an abstract idea.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
Claim(s) 1, 10, and 11 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Claim 1 recites the additional limitation “a processor that stores, in a storage, a piece of correspondence information in which a detection result by a sensor at a certain time point”; claims 10 and 11 recite the commensurate additional limitations “storing, in a storage, a piece of correspondence information in which a detection result by a sensor at a certain time point” and “store, in a storage, a piece of correspondence information in which a detection result by a sensor at a certain time point”, respectively. Storing, in a storage, a piece of correspondence information containing a detection result by a sensor at a certain time point is generic linking and merely specifies the data to be stored, which is considered insignificant extra-solution activity. The additional limitation “the sensor being included in a robot for detecting at least one action from a user”, recited in each of claims 1, 10, and 11, is generic linking. The additional limitation “is correlated with presence or absence of a predetermined action from outside to the robot within a predetermined period including the certain time point”, recited in each of claims 1, 10, and 11, is insignificant pre-solution data gathering. The additional limitation “based on pieces of correspondence information stored in the storage and corresponding to time points different from one another, the pieces of correspondence information each being the piece of correspondence information”, recited in each of claims 1, 10, and 11, merely specifies the data to be used by the abstract idea, which is considered insignificant extra-solution activity.
Therefore, it is clear that the additional elements consist of generic linking and insignificant extra-solution activity, which is not indicative of the abstract idea having been integrated into a practical application.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Claim(s) 1, 10, and 11 do not recite additional elements that amount to significantly more than the judicial exception.
Claim(s) 1, 10, and 11 do not recite any specific limitations that are not considered to be generic linking, insignificant extra-solution activity, or well-understood, routine, and conventional activity. A processor that stores, in a storage, a piece of correspondence information in which a detection result by a sensor at a certain time point is generic linking and merely specifies the data to be stored, which is considered insignificant extra-solution activity; furthermore, storing and retrieving information in memory is well-understood, routine, and conventional (see MPEP § 2106.05(d)(II) and the cases cited therein). The sensor being included in a robot for detecting at least one action from a user is generic linking. The limitation that the detection result is correlated with presence or absence of a predetermined action from outside to the robot within a predetermined period including the certain time point is insignificant pre-solution data gathering. Making the prediction based on pieces of correspondence information stored in the storage and corresponding to time points different from one another, the pieces of correspondence information each being the piece of correspondence information, merely specifies the data to be used by the abstract idea, which is considered insignificant extra-solution activity.
In conclusion, claim(s) 1, 10, and 11: (a) are directed toward an abstract idea; (b) do not recite additional elements that integrate the judicial exception into a practical application; and (c) do not recite additional elements that amount to significantly more than the judicial exception. It is therefore clear that the claims are directed toward non-statutory subject matter, and they are rejected under 35 U.S.C. § 101.
Regarding claim 2, this claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “wherein based on the pieces of correspondence information and a detection result by the sensor at a time point corresponding to the specific time point” merely specifies the data to be used and is considered insignificant extra-solution activity. The limitation “the processor makes the prediction for the action that the robot receives from the user at the specific time point” is a part of the abstract idea of claim 1. Therefore, claim 2 is also rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 3, this claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “wherein the processor determines based on the detection result by the sensor whether the robot has received the action from the outside” is insignificant pre-solution data gathering. The limitation “in response to determining that the robot has received the action, stores, in the storage, the piece of correspondence information in which information indicating that the robot has received the action is correlated with the detection result by the sensor at the certain time point in the predetermined period including a time point at which the robot received the action” merely specifies the data to be stored and is considered insignificant extra-solution activity; furthermore, storing and retrieving information in memory is well-understood, routine, and conventional (see MPEP § 2106.05(d)(II) and the cases cited therein). Therefore, claim 3 is also rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 4, this claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “wherein based on the pieces of correspondence information corresponding to the time points, the processor predicts a probability that the robot receives the action from the user at the specific time point” is a part of the abstract idea of claim 1. Therefore, claim 4 is also rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 8, this claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “wherein the processor performs a predetermined regression analysis using, as explanatory variables, detection results by sensors each being the sensor, the detection results being included in each of the pieces of correspondence information, thereby deriving a regression formula for the probability that the robot receives the action” merely specifies the data to be manipulated and is considered insignificant extra-solution activity. The limitation “derives the probability based on the derived regression formula and detection results by the sensors at a time point corresponding to the specific time point” is a part of the abstract idea of claim 1. Therefore, claim 8 is also rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 9, this claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “wherein the processor derives the regression formula in response to a predetermined minimum number of pieces of correspondence information or more being stored in the storage” merely specifies the data required and is insignificant extra-solution activity. The limitation “after deriving the regression formula, each time the processor stores a piece of correspondence information in the storage, updates the regression formula based on a plurality of pieces of correspondence information stored in the storage including the piece of correspondence information most recently stored” is insignificant pre-solution data gathering and merely specifies the data to be manipulated, which is insignificant extra-solution activity. Therefore, claim 9 is also rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Therefore, claims 2-4, 8, and 9 do not include additional elements that are sufficient to amount to significantly more than the judicial exception, and are likewise rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
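Solely for expository purposes, and not as part of the record, the data flow recited in claims 8-9 may be sketched as follows. The claims do not specify the particular regression analysis; this sketch assumes logistic regression as one analysis that would read on a “predetermined regression analysis,” and all names (e.g., ActionPredictor, min_records) are hypothetical.

```python
import math

class ActionPredictor:
    """Hypothetical sketch of claims 8-9: a regression formula is derived
    from stored correspondence information (sensor detection results as
    explanatory variables, presence/absence of the user's action as the
    response) once a minimum record count is reached, and is re-derived
    each time a new piece of correspondence information is stored."""

    def __init__(self, min_records=10):
        self.min_records = min_records  # claim 9: minimum before deriving the formula
        self.records = []               # stored pieces of correspondence information
        self.weights = None             # coefficients of the derived regression formula

    def store(self, sensor_values, action_received):
        # Claim 3: correlate the detection result with presence/absence of the action.
        self.records.append((list(sensor_values), 1.0 if action_received else 0.0))
        if len(self.records) >= self.min_records:
            self._derive_formula()      # claim 9: update the formula on each new record

    def _derive_formula(self, epochs=500, lr=0.1):
        # Claim 8: derive the regression formula by plain gradient-descent
        # logistic regression over the stored explanatory variables.
        n = len(self.records[0][0]) + 1          # +1 for the intercept term
        w = self.weights or [0.0] * n
        for _ in range(epochs):
            grad = [0.0] * n
            for x, y in self.records:
                xi = [1.0] + x
                p = 1.0 / (1.0 + math.exp(-sum(wi * v for wi, v in zip(w, xi))))
                for j, v in enumerate(xi):
                    grad[j] += (p - y) * v
            w = [wi - lr * g / len(self.records) for wi, g in zip(w, grad)]
        self.weights = w

    def predict_probability(self, sensor_values):
        # Claim 4/8: probability that the robot receives the action.
        if self.weights is None:
            return None                          # formula not yet derived
        xi = [1.0] + list(sensor_values)
        return 1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(self.weights, xi))))
```

As the sketch shows, deriving, updating, and applying the formula operate purely on stored data and yield a bare probability; nothing in this flow corresponds to an active control step.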
Claim 5 is not rejected under 35 U.S.C. § 101 because it recites the limitation “wherein the processor causes the robot to make a predetermined motion in response to the probability derived being equal to or more than a predetermined threshold value”. This limitation recites an active step of controlling the robot using the information generated by the abstract idea, and is therefore indicative of integration into a practical application. Claims 6-7 are not rejected under 35 U.S.C. § 101 as being dependent on claim 5.
The 35 U.S.C. § 101 rejection of independent claims 1, 10, and 11 can be overcome by amending the claims to recite an active step of controlling the robot using the information generated from the abstract idea. Active control steps of the same or similar phrasing as those recited in claim 5 would overcome the 35 U.S.C. § 101 rejection of independent claims 1, 10, and 11.
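By way of a hypothetical sketch only (the threshold value, function names, and robot interface below are assumptions, not applicant's disclosure), the distinction between a bare prediction and the active control step credited to claim 5 may be illustrated as follows:

```python
PROBABILITY_THRESHOLD = 0.7  # hypothetical "predetermined threshold value"

class StubRobot:
    """Stand-in for the robot's motion interface; illustrative only."""
    def __init__(self):
        self.moved = False

    def perform_predetermined_motion(self):
        # Placeholder for the claimed "predetermined motion".
        self.moved = True

def maybe_move(robot, probability):
    """Claim 5 sketch: command a predetermined motion when the derived
    probability meets or exceeds the threshold. Returning the probability
    alone (claims 1, 10, and 11 as filed) would remain within the abstract
    idea; the motion command is the active control step."""
    if probability is not None and probability >= PROBABILITY_THRESHOLD:
        robot.perform_predetermined_motion()  # active control of the robot
        return True
    return False
```

Only the conditional motion command corresponds to the practical-application consideration; the probability computation feeding it does not.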
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-7, 10, and 11 is/are rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as being anticipated by US 10898999 B1 (“Cohen”).
Regarding claim 1, Cohen teaches an information processing device comprising (Cohen: Abstract, “Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selective human-robot interaction. In some implementations, sensor data describing an environment of a robot is received, and a person in the environment of the robot is detected based on the sensor data. Scores indicative of properties of the detected person are generated based on the sensor data and processed using a machine learning model. Processing the scores can produce one or more outputs indicative of a likelihood that the detected person will perform a predetermined action in response to communication from the robot. Based on the one or more outputs of the machine learning model, the robot initiates communication with the detected person.”)
a processor that stores, in a storage, a piece of correspondence information in which a detection result by a sensor at a certain time point, the sensor being included in a robot for detecting at least one action from a user (Cohen: Column 4 lines 51-67, “Another innovative aspect of the subject matter described in this specification is embodied in methods that include the actions of: receiving, by the one or more computing devices, sensor data corresponding to human-robot interactions in which a robot attempted to obtain assistance from a human to perform an action; receiving, by the one or more computing devices, result data indicating whether each of the human-robot interactions resulted in a human assisting a robot to perform the action; training, by the one or more computing devices, a predictive model based on the sensor data and the result data to indicate, in response to input data describing a human near a robot, a likelihood that the human will perform the action if the robot initiates communication with the human; and providing, by the one or more computing devices, the predictive model to a robot, the robot being configured to use the predictive model to select people to interact with to perform the action.”, Column 10 lines 9-18, “After requesting input and assistance from the selected person, the robot 110 may provide and store data indicating the results of the attempted interaction. The sensor data 120 and the data indicating whether the target action was successfully completed, as well as the type of interaction requested, may be used to further refine the interaction prediction model 130. This data may be used to update the model 130 for the particular robot 110, and/or maybe provided to a server system to update one or more models used by other robots.”, Column 10 lines 38-44, “The robot 110 includes a variety of sensors 220 which enabled the robot 110 to obtain information regarding in the environment of the robot 110. 
Examples of these sensors 220 include a microphone, camera, and LIDAR module, a radar module, and infrared detector. Other sensors, such as a GPS receiver, accelerometers, force sensors, can indicate the current context of the robot 110.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. The cited passages clearly show that the system is configured to save, in memory, sensor data corresponding to human-robot interactions. The cited passages further show that the robot includes sensors configured to detect human actions.),
is correlated with presence or absence of a predetermined action from outside to the robot within a predetermined period including the certain time point (Cohen: Column 3 lines 61-67, “Receiving the sensor data includes receiving sensor data for a time period before the robot performs the action and a time period after the robot performs the action.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 13 lines 11-30, “For example, features may be determined based on data captured in a particular amount of time, e.g., the previous second, previous 5 seconds, previous minute, etc., or based on a number of measurements, e.g., the previous 5 measurements, the previous 50 measurements, etc.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. 
The cited passages clearly show that the sensor data is associated with whether a human performed an action to the robot, that this sensor data is correlated to time, and that it can be taken over a predetermined time period.), and
based on pieces of correspondence information stored in the storage and corresponding to time points different from one another, the pieces of correspondence information each being the piece of correspondence information, makes a prediction for an action that the robot receives from the user at a specific time point after the time points (Cohen: Column 4 lines 51-67, “Another innovative aspect of the subject matter described in this specification is embodied in methods that include the actions of: receiving, by the one or more computing devices, sensor data corresponding to human-robot interactions in which a robot attempted to obtain assistance from a human to perform an action; receiving, by the one or more computing devices, result data indicating whether each of the human-robot interactions resulted in a human assisting a robot to perform the action; training, by the one or more computing devices, a predictive model based on the sensor data and the result data to indicate, in response to input data describing a human near a robot, a likelihood that the human will perform the action if the robot initiates communication with the human; and providing, by the one or more computing devices, the predictive model to a robot, the robot being configured to use the predictive model to select people to interact with to perform the action.”, Column 7 lines 22-41, “Not only can the machine learning model indicate a user's disposition to communication, the machine learning model can learn the capability to predict a likelihood that interaction initiated by the robot 110 will result in a specific type of action by a person, e.g., orienting the robot, loading an object onto or unloading an object from the robot, providing a desired type of information, and so on.”, Column 17 lines 10-14, “A predictive model is trained based on the sensor data and the result data (506). 
The predictive model is trained to indicate, in response to input data describing a person near a robot, a likelihood that the human will perform the action if the robot initiates communication with the person.”. The cited passages clearly show that the system is configured to make a prediction on whether or not a human will interact with the robot based on the correspondence data.).
Regarding claim 2, Cohen teaches wherein based on the pieces of correspondence information and a detection result by the sensor at a time point corresponding to the specific time point, the processor makes the prediction for the action that the robot receives from the user at the specific time point (Cohen: Column 4 lines 51-67, “Another innovative aspect of the subject matter described in this specification is embodied in methods that include the actions of: receiving, by the one or more computing devices, sensor data corresponding to human-robot interactions in which a robot attempted to obtain assistance from a human to perform an action; receiving, by the one or more computing devices, result data indicating whether each of the human-robot interactions resulted in a human assisting a robot to perform the action; training, by the one or more computing devices, a predictive model based on the sensor data and the result data to indicate, in response to input data describing a human near a robot, a likelihood that the human will perform the action if the robot initiates communication with the human; and providing, by the one or more computing devices, the predictive model to a robot, the robot being configured to use the predictive model to select people to interact with to perform the action.”, Column 7 lines 22-41, “Not only can the machine learning model indicate a user's disposition to communication, the machine learning model can learn the capability to predict a likelihood that interaction initiated by the robot 110 will result in a specific type of action by a person, e.g., orienting the robot, loading an object onto or unloading an object from the robot, providing a desired type of information, and so on.”, Column 10 lines 38-44, “The robot 110 includes a variety of sensors 220 which enabled the robot 110 to obtain information regarding in the environment of the robot 110. 
Examples of these sensors 220 include a microphone, camera, and LIDAR module, a radar module, and infrared detector. Other sensors, such as a GPS receiver, accelerometers, force sensors, can indicate the current context of the robot 110.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. The cited passages clearly show that the system is configured to make a prediction that a human will interact with the robot based on sensor data that corresponds to result data (the result data being whether the human interacted with the robot or not)).
Regarding claim 3, Cohen teaches wherein the processor determines based on the detection result by the sensor whether the robot has received the action from the outside (Cohen: Column 10 lines 9-18, “After requesting input and assistance from the selected person, the robot 110 may provide and store data indicating the results of the attempted interaction. The sensor data 120 and the data indicating whether the target action was successfully completed, as well as the type of interaction requested, may be used to further refine the interaction prediction model 130. This data may be used to update the model 130 for the particular robot 110, and/or maybe provided to a server system to update one or more models used by other robots.”, Column 10 lines 38-44, “The robot 110 includes a variety of sensors 220 which enabled the robot 110 to obtain information regarding in the environment of the robot 110. Examples of these sensors 220 include a microphone, camera, and LIDAR module, a radar module, and infrared detector. Other sensors, such as a GPS receiver, accelerometers, force sensors, can indicate the current context of the robot 110.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. 
For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. The cited passages clearly show that the system uses detection results by a sensor to determine if a human has interacted with the robot.), and
in response to determining that the robot has received the action, stores, in the storage, the piece of correspondence information in which information indicating that the robot has received the action is correlated with the detection result by the sensor at the certain time point in the predetermined period including a time point at which the robot received the action (Cohen: Column 3 lines 61-67, “Receiving the sensor data includes receiving sensor data for a time period before the robot performs the action and a time period after the robot performs the action.”, Column 10 lines 9-18, “After requesting input and assistance from the selected person, the robot 110 may provide and store data indicating the results of the attempted interaction. The sensor data 120 and the data indicating whether the target action was successfully completed, as well as the type of interaction requested, may be used to further refine the interaction prediction model 130. This data may be used to update the model 130 for the particular robot 110, and/or maybe provided to a server system to update one or more models used by other robots.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 13 lines 11-30, “For example, features may be determined based on data captured in a particular amount of time, e.g., the previous second, previous 5 seconds, previous minute, etc., or based on a number of measurements, e.g., the previous 5 measurements, the previous 50 measurements, etc.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action.
The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. The cited passages clearly show that the sensor data is associated with whether a human performed an action to the robot, that this sensor data is correlated to time, and that it can be taken over a predetermined time period.).
Regarding claim 4, Cohen teaches wherein based on the pieces of correspondence information corresponding to the time points, the processor predicts a probability that the robot receives the action from the user at the specific time point (Cohen: Column 8 lines 55-67, “The model 130 may provide outputs 135 indicative of a likelihood that each person will successfully assist the robot. For example, when each individual person is detected, the sensor data 120 collected by the robot 110 may be segmented or pre-processed to isolate data sets that each represent properties of an individual person. Scores or portions of the sensor data 120 corresponding to a specific person can be provided to the model 130 to generate a score 135 for that person. In FIG. 1, the scores 135 are illustrated as probability measures, for example, that the person 145 has a 70% likelihood of successfully performing the target action if requested by the robot 110.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. 
The cited passages clearly show that the system is configured to determine a probability that a human will interact with the robot based on the correspondence information.).
Regarding claim 5, Cohen teaches wherein the processor causes the robot to make a predetermined motion in response to the probability derived being equal to or more than a predetermined threshold value (Cohen: Column 4 lines 9-31, “In some implementations, determining the direction of travel includes determining a direction of travel that moves the robot closer to the detected person based on determining that the one or more outputs of the machine learning model indicate at least a threshold likelihood that the detected person will perform a predetermined action in response to communication from the robot. For example, the robot can travel in a direction that brings the robot closer to a current position of the detected person, or closer to an estimated future position of the detected person inferred from the detected person's current or recent movement. Other types of travel can also be set for the robot.”, Column 9 lines 22-37, “The selection module 140 can also determine whether the likelihood indicated for a particular person satisfies at least a minimum threshold, for example, a minimum 50% likelihood of success. If the likelihood does not satisfy the minimum threshold, the robot 110 may decline to communicate with the person 145, for example, waiting until a person having a higher likelihood score is identified.”, Column 16 lines 25-35, “Based on the one or more outputs of the machine learning model, one or more computing devices cause the robot to initiate communication with the detected person (410). The computing device may also compare the score to one or more thresholds to determine whether at least a minimum likelihood of success is indicated, and initiate communication with the person in response.”. The cited passages clearly show that the robot is configured to make a predetermined motion when it is determined that the probability that a human will interact with the robot is above a predetermined threshold.).
Regarding claim 6, Cohen teaches wherein the processor stores, in the storage, the piece of correspondence information in which the detection result by the sensor at the certain time point corresponding to a time point at which the robot started to make the motion is correlated with the presence or absence of the action from the outside to the robot within, of the predetermined period, a predetermined time from the start of the motion (Cohen: Column 4 lines 32-45, “In some implementations, the method includes, before causing the robot to initiate communication with the detected person, repeating a set of operations comprising: obtaining additional sensor data, generating additional scores indicating properties of the detected person based on the additional sensor data, processing the additional scores using the machine learning model to generate additional output corresponding to the detected person, and evaluating the additional output of the machine learning model. The method also includes determining that one or more of the additional outputs of the machine learning model satisfies a threshold. Causing the robot to initiate communication with the detected person is performed in response to determining that the additional output satisfies the threshold.”, Column 10 lines 9-18, “After requesting input and assistance from the selected person, the robot 110 may provide and store data indicating the results of the attempted interaction. The sensor data 120 and the data indicating whether the target action was successfully completed, as well as the type of interaction requested, may be used to further refine the interaction prediction model 130. 
This data may be used to update the model 130 for the particular robot 110, and/or may be provided to a server system to update one or more models used by other robots.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 13 lines 11-30, “For example, features may be determined based on data captured in a particular amount of time, e.g., the previous second, previous 5 seconds, previous minute, etc., or based on a number of measurements, e.g., the previous 5 measurements, the previous 50 measurements, etc.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. The cited passages clearly show that the sensor data is associated with whether a human performed an action to the robot, that this sensor data is correlated to time, and that it can be taken over a predetermined time period. 
Additionally, the cited passages show that the robot can be configured to perform the process again after it has moved to a person and prior to interacting with said person.).
Regarding claim 7, Cohen teaches wherein the processor derives an evaluation value of the motion based on the presence or absence of the action from the outside to the robot within the predetermined time from the start of the motion, and based on the derived evaluation value, adjusts a content of the motion (Cohen: Column 4 lines 32-45, “In some implementations, the method includes, before causing the robot to initiate communication with the detected person, repeating a set of operations comprising: obtaining additional sensor data, generating additional scores indicating properties of the detected person based on the additional sensor data, processing the additional scores using the machine learning model to generate additional output corresponding to the detected person, and evaluating the additional output of the machine learning model. The method also includes determining that one or more of the additional outputs of the machine learning model satisfies a threshold. Causing the robot to initiate communication with the detected person is performed in response to determining that the additional output satisfies the threshold.”. The cited passages clearly show that the robot is configured to calculate additional scores after motion has begun, based on the sensor and result data, and can further adjust the motion (i.e. perform the interaction or not) based on this additional score.).
Regarding claim 10, Cohen teaches an information processing method that is performed by a computer, comprising (Cohen: Abstract, “Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selective human-robot interaction. In some implementations, sensor data describing an environment of a robot is received, and a person in the environment of the robot is detected based on the sensor data. Scores indicative of properties of the detected person are generated based on the sensor data and processed using a machine learning model. Processing the scores can produce one or more outputs indicative of a likelihood that the detected person will perform a predetermined action in response to communication from the robot. Based on the one or more outputs of the machine learning model, the robot initiates communication with the detected person.”):
storing, in a storage, a piece of correspondence information in which a detection result by a sensor at a certain time point, the sensor being included in a robot for detecting at least one action from a user (Cohen: Column 4 lines 51-67, “Another innovative aspect of the subject matter described in this specification is embodied in methods that include the actions of: receiving, by the one or more computing devices, sensor data corresponding to human-robot interactions in which a robot attempted to obtain assistance from a human to perform an action; receiving, by the one or more computing devices, result data indicating whether each of the human-robot interactions resulted in a human assisting a robot to perform the action; training, by the one or more computing devices, a predictive model based on the sensor data and the result data to indicate, in response to input data describing a human near a robot, a likelihood that the human will perform the action if the robot initiates communication with the human; and providing, by the one or more computing devices, the predictive model to a robot, the robot being configured to use the predictive model to select people to interact with to perform the action.”, Column 10 lines 9-18, “After requesting input and assistance from the selected person, the robot 110 may provide and store data indicating the results of the attempted interaction. The sensor data 120 and the data indicating whether the target action was successfully completed, as well as the type of interaction requested, may be used to further refine the interaction prediction model 130. This data may be used to update the model 130 for the particular robot 110, and/or may be provided to a server system to update one or more models used by other robots.”, Column 10 lines 38-44, “The robot 110 includes a variety of sensors 220 which enable the robot 110 to obtain information regarding the environment of the robot 110. 
Examples of these sensors 220 include a microphone, camera, and LIDAR module, a radar module, and infrared detector. Other sensors, such as a GPS receiver, accelerometers, force sensors, can indicate the current context of the robot 110.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. The cited passages clearly show that the system is configured to save, in memory, sensor data corresponding to human-robot interaction. The cited passages further show that the robot includes sensors configured to detect human actions.),
is correlated with presence or absence of a predetermined action from outside to the robot within a predetermined period including the certain time point (Cohen: Column 3 lines 61-67, “Receiving the sensor data includes receiving sensor data for a time period before the robot performs the action and a time period after the robot performs the action.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 13 lines 11-30, “For example, features may be determined based on data captured in a particular amount of time, e.g., the previous second, previous 5 seconds, previous minute, etc., or based on a number of measurements, e.g., the previous 5 measurements, the previous 50 measurements, etc.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. 
The cited passages clearly show that the sensor data is associated with whether a human performed an action to the robot, that this sensor data is correlated to time, and that it can be taken over a predetermined time period.); and
based on pieces of correspondence information stored in the storage and corresponding to time points different from one another, the pieces of correspondence information each being the piece of correspondence information, making a prediction for an action that the robot receives from the user at a specific time point after the time points (Cohen: Column 4 lines 51-67, “Another innovative aspect of the subject matter described in this specification is embodied in methods that include the actions of: receiving, by the one or more computing devices, sensor data corresponding to human-robot interactions in which a robot attempted to obtain assistance from a human to perform an action; receiving, by the one or more computing devices, result data indicating whether each of the human-robot interactions resulted in a human assisting a robot to perform the action; training, by the one or more computing devices, a predictive model based on the sensor data and the result data to indicate, in response to input data describing a human near a robot, a likelihood that the human will perform the action if the robot initiates communication with the human; and providing, by the one or more computing devices, the predictive model to a robot, the robot being configured to use the predictive model to select people to interact with to perform the action.”, Column 7 lines 22-41, “Not only can the machine learning model indicate a user's disposition to communication, the machine learning model can learn the capability to predict a likelihood that interaction initiated by the robot 110 will result in a specific type of action by a person, e.g., orienting the robot, loading an object onto or unloading an object from the robot, providing a desired type of information, and so on.”, Column 17 lines 10-14, “A predictive model is trained based on the sensor data and the result data (506). 
The predictive model is trained to indicate, in response to input data describing a person near a robot, a likelihood that the human will perform the action if the robot initiates communication with the person.”. The cited passages clearly show that the system is configured to make a prediction on whether or not a human will interact with the robot based on the correspondence data.).
Regarding claim 11, Cohen teaches a non-transitory computer-readable storage medium storing a program causing a computer to (Cohen: Abstract, “Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selective human-robot interaction. In some implementations, sensor data describing an environment of a robot is received, and a person in the environment of the robot is detected based on the sensor data. Scores indicative of properties of the detected person are generated based on the sensor data and processed using a machine learning model. Processing the scores can produce one or more outputs indicative of a likelihood that the detected person will perform a predetermined action in response to communication from the robot. Based on the one or more outputs of the machine learning model, the robot initiates communication with the detected person.”):
store, in a storage, a piece of correspondence information in which a detection result by a sensor at a certain time point, the sensor being included in a robot for detecting at least one action from a user (Cohen: Column 4 lines 51-67, “Another innovative aspect of the subject matter described in this specification is embodied in methods that include the actions of: receiving, by the one or more computing devices, sensor data corresponding to human-robot interactions in which a robot attempted to obtain assistance from a human to perform an action; receiving, by the one or more computing devices, result data indicating whether each of the human-robot interactions resulted in a human assisting a robot to perform the action; training, by the one or more computing devices, a predictive model based on the sensor data and the result data to indicate, in response to input data describing a human near a robot, a likelihood that the human will perform the action if the robot initiates communication with the human; and providing, by the one or more computing devices, the predictive model to a robot, the robot being configured to use the predictive model to select people to interact with to perform the action.”, Column 10 lines 9-18, “After requesting input and assistance from the selected person, the robot 110 may provide and store data indicating the results of the attempted interaction. The sensor data 120 and the data indicating whether the target action was successfully completed, as well as the type of interaction requested, may be used to further refine the interaction prediction model 130. This data may be used to update the model 130 for the particular robot 110, and/or may be provided to a server system to update one or more models used by other robots.”, Column 10 lines 38-44, “The robot 110 includes a variety of sensors 220 which enable the robot 110 to obtain information regarding the environment of the robot 110. 
Examples of these sensors 220 include a microphone, camera, and LIDAR module, a radar module, and infrared detector. Other sensors, such as a GPS receiver, accelerometers, force sensors, can indicate the current context of the robot 110.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. The cited passages clearly show that the system is configured to save, in memory, sensor data corresponding to human-robot interaction. The cited passages further show that the robot includes sensors configured to detect human actions.),
is correlated with presence or absence of a predetermined action from outside to the robot within a predetermined period including the certain time point (Cohen: Column 3 lines 61-67, “Receiving the sensor data includes receiving sensor data for a time period before the robot performs the action and a time period after the robot performs the action.”, Column 10 lines 45-62, “The sensor data for the various sensors may be time-stamped or synchronized so that different types of sensor data can be mapped together to indicate different observed parameters that occur at the same time.”, Column 13 lines 11-30, “For example, features may be determined based on data captured in a particular amount of time, e.g., the previous second, previous 5 seconds, previous minute, etc., or based on a number of measurements, e.g., the previous 5 measurements, the previous 50 measurements, etc.”, Column 16 line 62 – Column 17 line 6, “Sensor data corresponding to human-robot interactions is received (502). The sensor data can indicate interactions in which a robot attempted to obtain assistance from a person to perform an action. The sensor data can be accompanied by data indicating other information related to the interaction, such as the particular action that was targeted, the type of communication initiated by the robot, and so on. In addition to or instead of sensor data, one or more scores derived from sensor data can be obtained. For example, scores indicating how many people were present, activities and attributes of the people present, environmental factors, and other data can be obtained.”, Column 17 lines 6-9, “Result data is obtained, where the result data indicates whether each of the human-robot interactions resulted in a person assisting a robot to perform the action (504).”. 
The cited passages clearly show that the sensor data is associated with whether a human performed an action to the robot, that this sensor data is correlated to time, and that it can be taken over a predetermined time period.); and
based on pieces of correspondence information stored in the storage and corresponding to time points different from one another, the pieces of correspondence information each being the piece of correspondence information, make a prediction for an action that the robot receives from the user at a specific time point after the time points (Cohen: Column 4 lines 51-67, “Another innovative aspect of the subject matter described in this specification is embodied in methods that include the actions of: receiving, by the one or more computing devices, sensor data corresponding to human-robot interactions in which a robot attempted to obtain assistance from a human to perform an action; receiving, by the one or more computing devices, result data indicating whether each of the human-robot interactions resulted in a human assisting a robot to perform the action; training, by the one or more computing devices, a predictive model based on the sensor data and the result data to indicate, in response to input data describing a human near a robot, a likelihood that the human will perform the action if the robot initiates communication with the human; and providing, by the one or more computing devices, the predictive model to a robot, the robot being configured to use the predictive model to select people to interact with to perform the action.”, Column 7 lines 22-41, “Not only can the machine learning model indicate a user's disposition to communication, the machine learning model can learn the capability to predict a likelihood that interaction initiated by the robot 110 will result in a specific type of action by a person, e.g., orienting the robot, loading an object onto or unloading an object from the robot, providing a desired type of information, and so on.”, Column 17 lines 10-14, “A predictive model is trained based on the sensor data and the result data (506). 
The predictive model is trained to indicate, in response to input data describing a person near a robot, a likelihood that the human will perform the action if the robot initiates communication with the person.”. The cited passages clearly show that the system is configured to make a prediction on whether or not a human will interact with the robot based on the correspondence data.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over US 10898999 B1 ("Cohen") in view of US 2022/0009103 A1 ("Buerkle").
Regarding claim 8, Cohen does not teach wherein the processor performs a predetermined regression analysis using, as explanatory variables, detection results by sensors each being the sensor, the detection results being included in each of the pieces of correspondence information, thereby deriving a regression formula for the probability that the robot receives the action, and
derives the probability based on the derived regression formula and detection results by the sensors at a time point corresponding to the specific time point.
Buerkle, in the same field of endeavor, teaches wherein the processor performs a predetermined regression analysis using, as explanatory variables, detection results by sensors each being the sensor, the detection results being included in each of the pieces of correspondence information, thereby deriving a regression formula for the probability that the robot receives the action (Buerkle: ¶ 0152, “The systems and methods of the disclosure may utilize one or more machine learning models to perform corresponding functions of the agent (or other functions described herein). The term “model” as, for example, used herein may be understood as any kind of algorithm, which provides output data from input data (e.g., any kind of algorithm generating or calculating output data from input data). A machine learning model may be executed by a computing system to progressively improve performance of a specific task. According to the disclosure, parameters of a machine learning model may be adjusted during a training phase based on training data. A trained machine learning model may then be used during an inference phase to make predictions or decisions based on input data.”, ¶ 0154, “In supervised learning, the model may be built using a training set of data that contains both the inputs and corresponding desired outputs. Each training instance may include one or more inputs and a desired output. Training may include iterating through training instances and using an objective function to teach the model to predict the output for new inputs. In semi-supervised learning, a portion of the inputs in the training set may be missing the desired outputs.”, ¶ 0157, “The systems and methods of the disclosure may utilize one or more classification models. In a classification model, the outputs may be restricted to a limited set of values (e.g., one or more classes). The classification model may output a class for an input set of one or more input values. 
An input set may include road condition data, event data, sensor data, such as image data, radar data, LIDAR data and the like, and/or other data as would be understood by one of ordinary skill in the art. A classification model as described herein may, for example, classify certain driving conditions and/or environmental conditions, such as weather conditions, road conditions, and the like. References herein to classification models may contemplate a model that implements, e.g., any one or more of the following techniques: linear classifiers (e.g., logistic regression or naive Bayes classifier), support vector machines, decision trees, boosted trees, random forest, neural networks, or nearest neighbor.”. The cited passages clearly show that the system is configured to use a logistic regression model to make predictions based on sensor data captured by sensors mounted on the robot. Additionally, one of ordinary skill in the art would have recognized that training a logistic regression model involves providing the model with inputs and desired outputs in order for the model to determine the proper weights of the logistic regression model. Therefore, the cited passages clearly teach deriving a logistic regression formula.), and
derives the probability based on the derived regression formula and detection results by the sensors at a time point corresponding to the specific time point (Buerkle: ¶ 0090, “Another example (e.g. example 8) relates to a previously-described example (e.g. one or more of examples 1-7), wherein: the processor is configured to estimate a risk of harm to the human based on a collision probability of the one or more other autonomous agents with the human; the collision probability is determined based on a distance of the human to the one or more other autonomous agents and a behavior certainty score; and the behavior certainty score is determined based on a current movement of the human and a planned path of the autonomous agent through the environment.”, ¶ 0152, “The systems and methods of the disclosure may utilize one or more machine learning models to perform corresponding functions of the agent (or other functions described herein). The term “model” as, for example, used herein may be understood as any kind of algorithm, which provides output data from input data (e.g., any kind of algorithm generating or calculating output data from input data). A machine learning model may be executed by a computing system to progressively improve performance of a specific task. According to the disclosure, parameters of a machine learning model may be adjusted during a training phase based on training data. A trained machine learning model may then be used during an inference phase to make predictions or decisions based on input data.”, ¶ 0157, “The systems and methods of the disclosure may utilize one or more classification models. In a classification model, the outputs may be restricted to a limited set of values (e.g., one or more classes). The classification model may output a class for an input set of one or more input values. 
An input set may include road condition data, event data, sensor data, such as image data, radar data, LIDAR data and the like, and/or other data as would be understood by one of ordinary skill in the art. A classification model as described herein may, for example, classify certain driving conditions and/or environmental conditions, such as weather conditions, road conditions, and the like. References herein to classification models may contemplate a model that implements, e.g., any one or more of the following techniques: linear classifiers (e.g., logistic regression or naive Bayes classifier), support vector machines, decision trees, boosted trees, random forest, neural networks, or nearest neighbor.”. The cited passages clearly show that the system is configured to use a logistic regression model to determine a probability.).
Cohen teaches an information processing device comprising a processor that stores, in a storage, a piece of correspondence information in which a detection result by a sensor at a certain time point, the sensor being included in a robot for detecting at least one action from a user, is correlated with presence or absence of a predetermined action from outside to the robot within a predetermined period including the certain time point, and based on pieces of correspondence information stored in the storage and corresponding to time points different from one another, the pieces of correspondence information each being the piece of correspondence information, makes a prediction for an action that the robot receives from the user at a specific time point after the time points. Cohen does not teach wherein the processor performs a predetermined regression analysis using, as explanatory variables, detection results by sensors each being the sensor, the detection results being included in each of the pieces of correspondence information, thereby deriving a regression formula for the probability that the robot receives the action, and derives the probability based on the derived regression formula and detection results by the sensors at a time point corresponding to the specific time point. Buerkle teaches wherein the processor performs a predetermined regression analysis using, as explanatory variables, detection results by sensors each being the sensor, the detection results being included in each of the pieces of correspondence information, thereby deriving a regression formula for the probability that the robot receives the action, and derives the probability based on the derived regression formula and detection results by the sensors at a time point corresponding to the specific time point. 
A person of ordinary skill in the art would have had the technological capabilities required to have modified the device taught in Cohen with wherein the processor performs a predetermined regression analysis using, as explanatory variables, detection results by sensors each being the sensor, the detection results being included in each of the pieces of correspondence information, thereby deriving a regression formula for the probability that the robot receives the action, and derives the probability based on the derived regression formula and detection results by the sensors at a time point corresponding to the specific time point taught in Buerkle. Furthermore, the device taught in Cohen already teaches using a machine learning algorithm to calculate a probability that a human will interact with the robot. Additionally, logistic regression is a common and known algorithm that would have been well within the technological knowledge of a person of ordinary skill in the art. As such, a person of ordinary skill in the art would have been able to modify the machine learning algorithm taught in Cohen to use logistic regression as taught in Buerkle according to known methods. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of an information processing device comprising wherein the processor performs a predetermined regression analysis using, as explanatory variables, detection results by sensors each being the sensor, the detection results being included in each of the pieces of correspondence information, thereby deriving a regression formula for the probability that the robot receives the action, and derives the probability based on the derived regression formula and detection results by the sensors at a time point corresponding to the specific time point.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the information processing device taught in Cohen with wherein the processor performs a predetermined regression analysis using, as explanatory variables, detection results by sensors each being the sensor, the detection results being included in each of the pieces of correspondence information, thereby deriving a regression formula for the probability that the robot receives the action, and derives the probability based on the derived regression formula and detection results by the sensors at a time point corresponding to the specific time point taught in Buerkle with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results.
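For illustration, the type of logistic-regression derivation and probability computation mapped above can be sketched as follows. This is a minimal, hypothetical example (the data, function names, and training procedure are illustrative assumptions, not drawn from Cohen or Buerkle): weights of a logistic regression formula are derived from stored pieces of correspondence information, and a probability is then derived from that formula and the current sensor detection results.

```python
import math

def train_logistic(data, lr=0.1, epochs=2000):
    """Derive logistic-regression weights from (sensor_values, label)
    pairs by simple gradient descent; label is 1 if the predetermined
    action was received within the period, else 0."""
    n = len(data[0][0])
    w = [0.0] * n          # one weight per explanatory variable (sensor)
    b = 0.0                # intercept
    for _ in range(epochs):
        for x, y in data:
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid: predicted probability
            err = p - y
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    """Probability that the robot receives the action, derived from the
    regression formula (w, b) and current sensor detection results x."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))
```

Here each tuple in `data` pairs the sensor detection results at one time point with a 0/1 flag for whether the action was received within the corresponding period, matching the role of the pieces of correspondence information recited in the claim.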
Regarding claim 9, Cohen in view of Buerkle teaches wherein the processor derives the regression formula in response to a predetermined minimum number of pieces of correspondence information or more being stored in the storage (Buerkle: ¶ 0152, “The systems and methods of the disclosure may utilize one or more machine learning models to perform corresponding functions of the agent (or other functions described herein). The term “model” as, for example, used herein may be understood as any kind of algorithm, which provides output data from input data (e.g., any kind of algorithm generating or calculating output data from input data). A machine learning model may be executed by a computing system to progressively improve performance of a specific task. According to the disclosure, parameters of a machine learning model may be adjusted during a training phase based on training data. A trained machine learning model may then be used during an inference phase to make predictions or decisions based on input data.”, ¶ 0154, “In supervised learning, the model may be built using a training set of data that contains both the inputs and corresponding desired outputs. Each training instance may include one or more inputs and a desired output. Training may include iterating through training instances and using an objective function to teach the model to predict the output for new inputs. In semi-supervised learning, a portion of the inputs in the training set may be missing the desired outputs.”, ¶ 0157, “The systems and methods of the disclosure may utilize one or more classification models. In a classification model, the outputs may be restricted to a limited set of values (e.g., one or more classes). The classification model may output a class for an input set of one or more input values. 
An input set may include road condition data, event data, sensor data, such as image data, radar data, LIDAR data and the like, and/or other data as would be understood by one of ordinary skill in the art. A classification model as described herein may, for example, classify certain driving conditions and/or environmental conditions, such as weather conditions, road conditions, and the like. References herein to classification models may contemplate a model that implements, e.g., any one or more of the following techniques: linear classifiers (e.g., logistic regression or naive Bayes classifier), support vector machines, decision trees, boosted trees, random forest, neural networks, or nearest neighbor.”. One of ordinary skill in the art would have recognized that, in order to solve for the weights of a logistic regression equation, at least one data point for each input into the logistic regression equation is necessary to solve for said weights. Therefore, because the system teaches training a logistic regression model based on multiple sensor inputs, this limitation is taught.), and
after deriving the regression formula, each time the processor stores a piece of correspondence information in the storage, updates the regression formula based on a plurality of pieces of correspondence information stored in the storage including the piece of correspondence information most recently stored (Cohen: Column 15 lines 33-52, “In some implementations, the interaction estimation model 130 includes or is generated using a set of heuristics 320. The heuristics 320 may represent rules or policies for interpreting sensor data. As an example, one heuristic may assess motion of a robot and motion of a user. It may indicate that if a person backs away as the robot approaches, a decreased likelihood of successful interaction should be provided. This type of determination based on known or expected signals may be used to initially operate the model 130. As more training data is acquired, examples may prove or disprove the predictive value of individual heuristics. As a result, data acquired from various robots in different locations may be used to incrementally update the model 130 and learn which heuristics 320 are most accurately predicting outcomes and which are not. The heuristics 320 may be altered over time based on the increased learning of the system, for example, to remove or alter low-performing heuristics, or to decrease the influence of the heuristics 320 as the model 130 training proceeds and provides more accurate output.”. The cited passages clearly show that the machine learning model is continuously updated with newly acquired training data.).
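The minimum-count trigger and continuous-update behavior mapped above can likewise be sketched. In this hypothetical example (class and parameter names, the threshold value, and the gradient-descent fitting routine are illustrative assumptions, not drawn from either reference), the regression formula is first derived once a minimum number of pieces of correspondence information is stored, and is re-derived over all stored pieces each time a new piece is stored:

```python
import math

class CorrespondenceStore:
    """Stores (sensor_readings, action_flag) pairs; derives a
    logistic-regression formula once min_pieces are stored, then
    re-derives it after every newly stored piece."""

    def __init__(self, min_pieces=4, lr=0.1, epochs=1000):
        self.min_pieces = min_pieces
        self.lr = lr
        self.epochs = epochs
        self.pieces = []          # all stored correspondence information
        self.w, self.b = None, 0.0

    def store(self, sensors, action_received):
        """Store one piece of correspondence information and, once enough
        pieces exist, update the regression formula over all of them."""
        self.pieces.append((list(sensors), 1 if action_received else 0))
        if len(self.pieces) >= self.min_pieces:
            self._refit()

    def _refit(self):
        # Re-derive weights by gradient descent over every stored piece,
        # including the most recently stored one.
        n = len(self.pieces[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(self.epochs):
            for x, y in self.pieces:
                z = b + sum(wi * xi for wi, xi in zip(w, x))
                err = 1.0 / (1.0 + math.exp(-z)) - y
                b -= self.lr * err
                w = [wi - self.lr * err * xi for wi, xi in zip(w, x)]
        self.w, self.b = w, b

    def probability(self, sensors):
        """Probability of receiving the action, from the current formula."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, sensors))
        return 1.0 / (1.0 + math.exp(-z))
```

Until the minimum count is reached, no formula exists (`w` is `None`); afterward, every call to `store` retrains on the full set, which parallels Cohen's incremental updating of the model as more training data is acquired.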
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Noah W Stiebritz whose telephone number is (571) 272-3414. The examiner can normally be reached Monday through Friday, 7:00 a.m. to 5:00 p.m. EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.W.S./Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658