DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice for all US Patent Applications filed on or after March 16, 2013
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of the Claims
This communication is in response to communications received on 8/19/25. Independent claims 1, 8, and 15 and dependent claims 2, 3, 5-7, 9, 10, 12-14, 16, 17, 19, and 20 are amended; no claims are cancelled; no claims are new. Applicant states support can be found at instant specification Figs. 1, 2A, 2B and [0001, 0014]. Therefore, claims 1-20 are pending and have been addressed below.
Response to Arguments
Applicant’s arguments, see Applicant’s remarks filed 8/19/25, with respect to the rejection of claims 1-20 under 35 U.S.C. 101, have been fully considered but are not persuasive to the extent they apply to the amended 101 rejection below.
Applicant respectfully traversed the rejection on pg. 1.
The Examiner respectfully disagrees because the invention as claimed, under its broadest reasonable interpretation, is directed to the abstract idea of dynamic pairing of a device and an individual. The abstract idea is a method of organizing human activity (commercial or legal interactions, including advertising, marketing or sales activities or behaviors, or business relations) because the invention is directed to economic and/or business relationships as they are associated with dynamically pairing a device and an individual to perform a task.
Thus, the arguments are unpersuasive.
Applicant’s arguments, see Applicant’s remarks filed 8/19/25, with respect to the rejections of claims 1-20 under 35 U.S.C. 102 and 103, have been fully considered but are not persuasive to the extent they apply to the amended prior art rejections below.
Applicant respectfully traversed the rejection on pg. 2-6.
The Examiner respectfully disagrees. Regarding the pairing limitation, Applicant argues that Nath does not teach pairing the first task data and the device with a first individual. While Applicant has now addressed [0017]-[0018], the argument remains unpersuasive because the Office action’s interpretation of [0017]-[0018], under which the virtual worker is the device, is not addressed. The Office action states: “[0017] workers (individuals) can be human or virtual …; [0018] tasks can be assigned to groups consisting of human or virtual workers based on the worker profile, thus matching a human worker to the task and a virtual worker (device)”.
Thus, the arguments are unpersuasive.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more, as set forth below.
The limitations below of representative claims 1, 8, and 15, under their broadest reasonable interpretation, are directed to dynamic pairing of a device and an individual.
Step 1: The claims, as drafted, are directed to a process (claims 8-14 recite a series of steps) and a system (claims 1-7 and 15-20 recite a series of components).
Step 2A – Prong 1: The claimed invention is directed to an abstract idea without significantly more. The claims recite (emphasis added):
Claim 8: receiving first task data indicative of a first set of tasks to be executed by a human individual among one or more human individuals, device data indicative of identification information of a device, and data of the one or more human individuals;
selecting at least one attribute of the data of the one or more human individuals;
weighting the selected at least one attribute of the data of the one or more human individuals;
determining a profile and a profile value for each of the one or more human individuals based on the weighted at least one attribute of the data of the one or more human individuals; and
pairing the first task data and the device with a first human individual from among the one or more human individuals based on the determined profile value for each of the one or more human individuals,
storing the profile of the first human individual with an associated feature set including the first task data, the data of the first human individual, and the device data, and a target indicative of a pairing status,
training a model on the stored profile of the first human individual associated with the feature set and target and historical data indicative of a set of profiles associated with feature sets and targets, and
generating, utilizing the trained model, an output for a profile of another human individual.
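For illustration only, the weighting, profile-determination, and pairing steps recited above amount to a weighted-scoring and best-match routine. The following minimal sketch uses hypothetical attribute names, weights, and values drawn neither from the claims nor from the specification:

```python
# Illustrative sketch of the recited weighting/pairing steps.
# All names and values below are hypothetical, not claim language.

def profile_value(attributes, weights):
    """Weight the selected attributes and reduce them to a single profile value."""
    return sum(weights[k] * attributes[k] for k in weights)

def pair_task(task, individuals, weights):
    """Pair the task with the individual having the highest profile value."""
    return max(individuals, key=lambda ind: profile_value(ind["attrs"], weights))

individuals = [
    {"name": "A", "attrs": {"speed": 0.9, "experience": 0.4}},
    {"name": "B", "attrs": {"speed": 0.5, "experience": 0.9}},
]
weights = {"speed": 0.3, "experience": 0.7}
best = pair_task("first task", individuals, weights)
print(best["name"])
```

As the sketch shows, the pairing reduces to scoring each individual's weighted attributes and selecting the maximum.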
Claims 1 and 15: the same analysis applies as for claim 8.
Dependent claims 2-7, 9-14, and 16-20 recite the same or similar abstract idea(s) as independent claims 1, 8, and 15, with merely a further narrowing of the abstract idea(s).
The identified limitations of the independent and dependent claims above fall well within the groupings of subject matter identified by the courts as being abstract concepts of:
a method of organizing human activity (commercial or legal interactions including advertising, marketing or sales activities or behaviors, or business relations) because the invention is directed to economic and/or business relationships as they are associated with dynamic pairing of a device and an individual to perform a task.
Step 2A – Prong 2: This judicial exception is not integrated into a practical application because:
The additional elements beyond the abstract idea include: a device and a machine learning model (claims 1, 8, and 15); a system comprising a memory and a processor (claim 1); a non-transitory computer-readable medium having stored thereon computer-readable instructions executable by a processor (claim 15); a device (claims 2, 7, 9, 14, 16, 19); and a robot, an AMR, or a drone (claims 5, 12, 19).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, fail to describe:
Improvements to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a)
Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition – see Vanda Memo
Applying the judicial exception with, or by use of, a particular machine – see MPEP 2106.05(b)
Effecting a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and Vanda Memo.
Thus, the additional elements described above with respect to Step 2A Prong 2 merely amount to (as additionally noted by instant specification [0025]-[0027]) invoking a general purpose computer as a tool to apply the instructions of an abstract idea in a particular technological environment. Mere application of an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular technological field, do not integrate the abstract idea into a practical application (MPEP 2106.05(f) & (h)).
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As described above with respect to Step 2A Prong 2 (and as additionally noted by instant specification [0025]-[0027]), the additional elements merely invoke a general purpose computer as a tool to apply the instructions of an abstract idea in a particular technological environment, which does not integrate the abstract idea into a practical application. Similarly, the combination and arrangement of the identified additional elements, when analyzed under Step 2B, fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea for the same reasons as set forth above (MPEP 2106.05(f) & (h)).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 7-12, and 14-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nath et al. (US 2015/0317582 A1).
Regarding claims 1, 8, and 15 (currently amended), Nath teaches a method for pairing a device and a human individual, comprising:
receiving first task data indicative of a first set of tasks to be executed by a human individual among one or more human individuals, device data indicative of identification information of a device, and data of the one or more human individuals [see at least [0026] “the processes enabled by the Context-Aware Crowdsourced Task Optimizer begin operation by using a task input module 100 to receive one or more tasks 105 from human or virtual task publishers (110, 115, 120). In addition, the task input module 100 also receives one or more optional task contexts, e.g., prices, location, deadlines, number of instances, etc.”;
[0017] “However, it should be understood that the processes enabled by the Context-Aware Crowdsourced Task Optimizer apply to both human and virtual workers. Examples of virtual workers include, but are not limited to, computers and applications or tasks running on those computers (including both fixed and mobile computing devices), robots, drones, driverless taxis, etc. As such, the discussion of “workers” in the following discussion should be understood to apply to both real and virtual workers.”;
[0018] “Further, it should also be understood that groups of human and/or virtual workers can also be “bundled” by the Context-Aware Crowdsourced Task Optimizer such that multiple workers can cooperate to perform particular tasks or task bundles. In the case of multiple workers acting as a group, the Context-Aware Crowdsourced Task Optimizer can learn a predictive worker model across the group as a whole, thereby effectively treating a group of multiple workers as a single “super worker” or the like. As such, recommending tasks to particular groups of workers is treated in the same way as recommending tasks to individual workers, with the primary difference being that the group entity will inherently have increased capacity to complete more tasks than individual workers.”;
[0145] receive worker (individual) data “The Context-Aware Crowdsourced Task Optimizer also receives (410) a separate machine-learned predictive worker model (415) for each of a plurality of workers in a worker pool. In addition, the Context-Aware Crowdsourced Task Optimizer receives (420) one or more current contexts (425) for each of the workers in the worker pool.”;
[0072] receive worker (individual) data “Note that in the case of new workers, where no history is currently available, factors or profile information such as worker age, gender, demographics, etc., can be used to bootstrap a default model which will then be updated over time as more information becomes available with respect to the worker's history regarding task recommendation acceptances, completions, etc. Further, in various embodiments, an online worker questionnaire or the like can be used to request initial worker preferences, profile information, task history, etc., for use in constructing or learning the initial worker model.”;
[0071, 0098, 0088] additional worker data “The Context-Aware Crowdsourced Task Optimizer automatically learns and models such worker preferences by analyzing the history of individual workers or particular groups or bundles of workers with respect to various tasks acceptances, task completions, task parameters including payment amounts, complexity or difficulty, distance, time, etc.” and “various worker contexts such as drone range, lifting capabilities, … the drone (or a human or other computer controlling the drone) can plan flight routes to allow the drone to complete the bundled tasks” and “any of a wide range of additional considerations or parameters (e.g., age, gender, fitness level, education, skills, worker's computing devices, tools, equipment, travel capabilities, quality reviews of worker or task result from task publisher, etc.)”];
selecting at least one attribute of the data of the one or more human individuals;
weighting the selected at least one attribute of the data of the one or more human individuals;
determining a profile and a profile value for each of the one or more human individuals based on the weighted at least one attribute of the data of the one or more human individuals [for the limitations above, see at least [0021] attribute data “Advantageously the Context-Aware Crowdsourced Task Optimizer can use any of a large number of optimization algorithms or processes to solve this optimization problem, e.g., greedy algorithms, expectation-maximization algorithms, etc. For example, as is well known to those skilled in the art, in mathematical optimization, constrained optimization … Some of the constraints considered by the Context-Aware Crowdsourced Task Optimizer include, but are not limited to, available workers, present and future contexts of those available workers, … etc.”;
[0076] weights of attribute data, profile, and profile value “In view of the considerations discussed above regarding data modeling and worker observations, the probability of a worker ω completing a particular task τ can be denoted by Pτω (y=1|x; θ), where y indicates whether the worker completes a task (y=1), or not (y=0), vector x=(x1, x2, . . . ) includes real-time parameters (such as payment, distance, task complexity, etc.) and θω=(θ1, θ2, θ3, . . . ) is the learned coefficients/weights corresponding to those parameters for each particular worker. In other words, as noted above, the predictive worker model θω for each worker is a parameter vector that is learned using regression from task history of each corresponding worker ω. The following discussion expands these concepts.”;
[0089-0090] weights of attribute data, profile, and profile value are continually updated “2.3.3 Model Updates: The learned worker models are updated over time as more data is collected for each worker. Periodic, continuous, or real-time updates to these models over time ensures that the Context-Aware Crowdsourced Task Optimizer has the ability to provide accurate and up to date estimations of workers' predicted behaviors regarding recommended tasks or task bundles.”;
[0028] profile value and pairing task to worker (individual) based on profile value “Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), a task recommendation module 130 evaluates those tasks, any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool.”]; and
pairing the first task data and the device with a first human individual from among the one or more human individuals based on the determined profile value for each of the one or more human individuals [see at least [0028] profile value and pairing task to worker (individual) based on profile value “Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), a task recommendation module 130 evaluates those tasks, any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool.”; [0146] “The Context-Aware Crowdsourced Task Optimizer then evaluates (430) the tasks (405) and any associated task contexts, the worker models (415), the current contexts (425), and optional future contexts (440) to construct optimized bundles of one or more tasks”;
[0017] workers (individuals) can be human or virtual “However, it should be understood that the processes enabled by the Context-Aware Crowdsourced Task Optimizer apply to both human and virtual workers. Examples of virtual workers include, but are not limited to, computers and applications or tasks running on those computers (including both fixed and mobile computing devices), robots, drones, driverless taxis, etc. As such, the discussion of “workers” in the following discussion should be understood to apply to both real and virtual workers.”;
[0018] tasks can be assigned to groups consisting of human or virtual workers based on the worker profile, thus matching a human worker to the task and a virtual worker (device) “Further, it should also be understood that groups of human and/or virtual workers can also be “bundled” by the Context-Aware Crowdsourced Task Optimizer such that multiple workers can cooperate to perform particular tasks or task bundles. In the case of multiple workers acting as a group, the Context-Aware Crowdsourced Task Optimizer can learn a predictive worker model across the group as a whole, thereby effectively treating a group of multiple workers as a single “super worker” or the like. As such, recommending tasks to particular groups of workers is treated in the same way as recommending tasks to individual workers, with the primary difference being that the group entity will inherently have increased capacity to complete more tasks than individual workers.”],
storing the profile of the first human individual with an associated feature set including the first task data, the data of the first human individual, and the device data, and a target indicative of a pairing status [see at least [0030] “A context update module 185 is used to evaluate sensor data from workers' devices, and/or user input to determine and update the current and future worker contexts (165 and 170, respectively) whenever additional worker context information becomes available”],
training a model on the stored profile of the first human individual associated with the feature set and target and historical data indicative of a set of profiles associated with feature sets and targets [see at least [0029] “In general, the aforementioned learned worker models 135 are generated by a worker model generation module 180 that uses any of a wide variety of machine learning techniques to generate machine-learned models for each worker (or worker group). The worker model generation module 180 also update the learned worker models 135 over time when additional observations (e.g., task completions, worker history, etc.) become available.”;
[0030] “A context update module 185 is used to evaluate sensor data from workers' devices, and/or user input to determine and update the current and future worker contexts (165 and 170, respectively) whenever additional worker context information becomes available”], and
generating, utilizing the trained model, an output for a profile of another human individual [see at least [0071-0072] “The Context-Aware Crowdsourced Task Optimizer automatically learns and models such worker preferences by analyzing the history of individual workers or particular groups or bundles of workers with respect to various tasks acceptances, task completions, task parameters including payment amounts, complexity or difficulty, distance, time, etc. The learned model for each worker is then used by the Context-Aware Crowdsourced Task Optimizer to evaluate the likelihood that a particular worker will successfully complete a future task.
Note that in the case of new workers, where no history is currently available, factors or profile information such as worker age, gender, demographics, etc., can be used to bootstrap a default model which will then be updated over time as more information becomes available with respect to the worker's history regarding task recommendation acceptances, completions, etc. Further, in various embodiments, an online worker questionnaire or the like can be used to request initial worker preferences, profile information, task history, etc., for use in constructing or learning the initial worker model.”].
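Nath’s quoted formulation Pτω(y=1|x; θ) ([0076]), with real-time parameter vector x and learned per-worker weights θ, reads as a logistic-regression-style model. The following minimal sketch assumes a logistic link (which Nath’s regression formulation suggests but does not expressly state) and uses hypothetical parameter values:

```python
import math

def completion_probability(x, theta):
    """P(y=1 | x; theta): probability that a worker completes a task,
    modeled as a logistic function of weighted real-time parameters."""
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical values: x = (payment, distance, task complexity),
# theta = learned per-worker coefficients for those parameters.
x = (1.0, 0.5, 0.2)
theta = (0.8, -1.2, -0.4)
p = completion_probability(x, theta)
print(round(p, 3))
```

Under this reading, each worker's learned coefficient vector θ plays the role of the profile, and the resulting probability plays the role of a profile value against which tasks are matched.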
Regarding claims 2, 9, and 16 (currently amended), Nath teaches the method of claim 8, further comprising:
receiving second task data indicative of a second set of tasks to be executed by a human individual from among the one or more individuals, the device data indicative of identification information of the device, and the data of the one or more individuals [see at least [0026] “the processes enabled by the Context-Aware Crowdsourced Task Optimizer begin operation by using a task input module 100 to receive one or more tasks 105 from human or virtual task publishers (110, 115, 120). In addition, the task input module 100 also receives one or more optional task contexts, e.g., prices, location, deadlines, number of instances, etc.”;
[0017] “However, it should be understood that the processes enabled by the Context-Aware Crowdsourced Task Optimizer apply to both human and virtual workers. Examples of virtual workers include, but are not limited to, computers and applications or tasks running on those computers (including both fixed and mobile computing devices), robots, drones, driverless taxis, etc. As such, the discussion of “workers” in the following discussion should be understood to apply to both real and virtual workers.”;
[0018] “Further, it should also be understood that groups of human and/or virtual workers can also be “bundled” by the Context-Aware Crowdsourced Task Optimizer such that multiple workers can cooperate to perform particular tasks or task bundles. In the case of multiple workers acting as a group, the Context-Aware Crowdsourced Task Optimizer can learn a predictive worker model across the group as a whole, thereby effectively treating a group of multiple workers as a single “super worker” or the like. As such, recommending tasks to particular groups of workers is treated in the same way as recommending tasks to individual workers, with the primary difference being that the group entity will inherently have increased capacity to complete more tasks than individual workers.”;
[0145] receive worker (individual) data “The Context-Aware Crowdsourced Task Optimizer also receives (410) a separate machine-learned predictive worker model (415) for each of a plurality of workers in a worker pool. In addition, the Context-Aware Crowdsourced Task Optimizer receives (420) one or more current contexts (425) for each of the workers in the worker pool.”;
[0072] receive worker (individual) data “Note that in the case of new workers, where no history is currently available, factors or profile information such as worker age, gender, demographics, etc., can be used to bootstrap a default model which will then be updated over time as more information becomes available with respect to the worker's history regarding task recommendation acceptances, completions, etc. Further, in various embodiments, an online worker questionnaire or the like can be used to request initial worker preferences, profile information, task history, etc., for use in constructing or learning the initial worker model.”;
[0071, 0098, 0088] additional worker data “The Context-Aware Crowdsourced Task Optimizer automatically learns and models such worker preferences by analyzing the history of individual workers or particular groups or bundles of workers with respect to various tasks acceptances, task completions, task parameters including payment amounts, complexity or difficulty, distance, time, etc.” and “various worker contexts such as drone range, lifting capabilities, … the drone (or a human or other computer controlling the drone) can plan flight routes to allow the drone to complete the bundled tasks” and “any of a wide range of additional considerations or parameters (e.g., age, gender, fitness level, education, skills, worker's computing devices, tools, equipment, travel capabilities, quality reviews of worker or task result from task publisher, etc.)”];
selecting the at least one attribute of the data of the one or more human individuals;
weighting the selected at least one attribute of the data of the one or more human individuals;
updating the profile and the profile value for each of the one or more human individuals based on the weighted at least one attribute of the data of the one or more human individuals [for the limitations above, see at least [0021] attribute data “Advantageously the Context-Aware Crowdsourced Task Optimizer can use any of a large number of optimization algorithms or processes to solve this optimization problem, e.g., greedy algorithms, expectation-maximization algorithms, etc. For example, as is well known to those skilled in the art, in mathematical optimization, constrained optimization … Some of the constraints considered by the Context-Aware Crowdsourced Task Optimizer include, but are not limited to, available workers, present and future contexts of those available workers, … etc.”;
[0076] weights of attribute data, profile, and profile value “In view of the considerations discussed above regarding data modeling and worker observations, the probability of a worker ω completing a particular task τ can be denoted by Pτω (y=1|x; θ), where y indicates whether the worker completes a task (y=1), or not (y=0), vector x=(x1, x2, . . . ) includes real-time parameters (such as payment, distance, task complexity, etc.) and θω=(θ1, θ2, θ3, . . . ) is the learned coefficients/weights corresponding to those parameters for each particular worker. In other words, as noted above, the predictive worker model θω for each worker is a parameter vector that is learned using regression from task history of each corresponding worker ω. The following discussion expands these concepts.”;
[0089-0090] weights of attribute data, profile, and profile value are continually updated “2.3.3 Model Updates: The learned worker models are updated over time as more data is collected for each worker. Periodic, continuous, or real-time updates to these models over time ensures that the Context-Aware Crowdsourced Task Optimizer has the ability to provide accurate and up to date estimations of workers' predicted behaviors regarding recommended tasks or task bundles.”;
[0028] profile value and pairing task to worker (individual) based on profile value “Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), a task recommendation module 130 evaluates those tasks, any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool.”]; and
pairing the second task data and the device with a second human individual, different from the first human individual, from among the one or more human individuals based on the updated profile value for each of the one or more human individuals [see at least [0028] profile value and pairing task to worker (individual) based on profile value “Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), a task recommendation module 130 evaluates those tasks, any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool.”; [0146] “The Context-Aware Crowdsourced Task Optimizer then evaluates (430) the tasks (405) and any associated task contexts, the worker models (415), the current contexts (425), and optional future contexts (440) to construct optimized bundles of one or more tasks”;
[0017] workers (individuals) can be human or virtual “However, it should be understood that the processes enabled by the Context-Aware Crowdsourced Task Optimizer apply to both human and virtual workers. Examples of virtual workers include, but are not limited to, computers and applications or tasks running on those computers (including both fixed and mobile computing devices), robots, drones, driverless taxis, etc. As such, the discussion of “workers” in the following discussion should be understood to apply to both real and virtual workers.”;
[0018] tasks can be assigned to groups consisting of human or virtual workers based on the worker profile, thus matching a human worker to the task and a virtual worker (device) “Further, it should also be understood that groups of human and/or virtual workers can also be “bundled” by the Context-Aware Crowdsourced Task Optimizer such that multiple workers can cooperate to perform particular tasks or task bundles. In the case of multiple workers acting as a group, the Context-Aware Crowdsourced Task Optimizer can learn a predictive worker model across the group as a whole, thereby effectively treating a group of multiple workers as a single “super worker” or the like. As such, recommending tasks to particular groups of workers is treated in the same way as recommending tasks to individual workers, with the primary difference being that the group entity will inherently have increased capacity to complete more tasks than individual workers.”].
Regarding claims 3, 10, and 17 (currently amended), Nath teaches the method of claim 9, further comprising
determining the profile and the profile value for each of the one or more human individuals based on the weighted at least one attribute of the data of the one or more human individuals before a commencement of a predetermined period, and
updating the profile and the profile value for each of the one or more human individuals based on the weighted selected at least one attribute of the data of the one or more human individuals during the predetermined period [for the limitations above, see at least [0028] “Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), a task recommendation module 130 evaluates those tasks, any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool. … The task recommendation module 130 then recommends or presents those task bundles to specific workers. In the case that workers receiving task or task bundle recommendations do not accept the recommended tasks within some period time (e.g., a task acceptance round), the task recommendation module will then recommend some or all of those tasks to alternate workers in subsequent rounds. Similarly, if accepted tasks are not completed by workers within some predefined period of time, those tasks will be withdrawn from the accepting worker and offered or recommended to alternate workers for completion.”;
[0147] “In addition, in various embodiments, the Context-Aware Crowdsourced Task Optimizer updates (445) worker models (415) as additional worker history becomes available with respect to task acceptances, task completions, payment demands, etc.”].
Regarding claims 4, 11, and 18, Nath teaches the method of claim 8, wherein the at least one attribute is one or more of a route distance of one or more completed sets of tasks, a total distance traveled during a predetermined time period, an average speed, a number of completed sets of tasks during the predetermined time period, a number of rest periods during the predetermined time period, a navigated elevation change, a work environment, and energy expenditure associated with the set of tasks [see at least [0071] “The Context-Aware Crowdsourced Task Optimizer automatically learns and models such worker preferences by analyzing the history of individual workers or particular groups or bundles of workers with respect to various tasks acceptances, task completions, task parameters including payment amounts, complexity or difficulty, distance, time, etc.”].
Regarding claims 5, 12, and 19 (currently amended), Nath teaches the method of claim 8, wherein
the device is one of a robot, an autonomous mobile robot (AMR) and a drone, and
the device assists the first human individual with executing the first task data indicative of the first set of tasks [for the limitations above, see at least [0028] profile value and pairing a task to a worker (individual) based on profile value “Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), a task recommendation module 130 evaluates those tasks, any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool.”; [0146] “The Context-Aware Crowdsourced Task Optimizer then evaluates (430) the tasks (405) and any associated task contexts, the worker models (415), the current contexts (425), and optional future contexts (440) to construct optimized bundles of one or more tasks”;
[0017] workers (individuals) can be human or virtual “However, it should be understood that the processes enabled by the Context-Aware Crowdsourced Task Optimizer apply to both human and virtual workers. Examples of virtual workers include, but are not limited to, computers and applications or tasks running on those computers (including both fixed and mobile computing devices), robots, drones, driverless taxis, etc. As such, the discussion of “workers” in the following discussion should be understood to apply to both real and virtual workers.”;
[0018] tasks can be assigned to groups consisting of human or virtual workers based on the worker profile, thus matching a human worker to a task and a virtual worker (device) “Further, it should also be understood that groups of human and/or virtual workers can also be “bundled” by the Context-Aware Crowdsourced Task Optimizer such that multiple workers can cooperate to perform particular tasks or task bundles. In the case of multiple workers acting as a group, the Context-Aware Crowdsourced Task Optimizer can learn a predictive worker model across the group as a whole, thereby effectively treating a group of multiple workers as a single “super worker” or the like. As such, recommending tasks to particular groups of workers is treated in the same way as recommending tasks to individual workers, with the primary difference being that the group entity will inherently have increased capacity to complete more tasks than individual workers.”].
Regarding claims 7, 14, and 20 (currently amended), Nath teaches the method of claim 8, further comprising pairing the first task data and the device with the first human individual from among the one or more human individuals based on a lowest determined profile value [see at least [0028] profile value and pairing a task to a worker (individual) based on profile value to optimize pricing, thus the lowest profile value “Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), a task recommendation module 130 evaluates those tasks, any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool.”; [0146] “The Context-Aware Crowdsourced Task Optimizer then evaluates (430) the tasks (405) and any associated task contexts, the worker models (415), the current contexts (425), and optional future contexts (440) to construct optimized bundles of one or more tasks”;
[0017] workers (individuals) can be human or virtual “However, it should be understood that the processes enabled by the Context-Aware Crowdsourced Task Optimizer apply to both human and virtual workers. Examples of virtual workers include, but are not limited to, computers and applications or tasks running on those computers (including both fixed and mobile computing devices), robots, drones, driverless taxis, etc. As such, the discussion of “workers” in the following discussion should be understood to apply to both real and virtual workers.”;
[0018] tasks can be assigned to groups consisting of human or virtual workers based on the worker profile, thus matching a human worker to a task and a virtual worker (device) “Further, it should also be understood that groups of human and/or virtual workers can also be “bundled” by the Context-Aware Crowdsourced Task Optimizer such that multiple workers can cooperate to perform particular tasks or task bundles. In the case of multiple workers acting as a group, the Context-Aware Crowdsourced Task Optimizer can learn a predictive worker model across the group as a whole, thereby effectively treating a group of multiple workers as a single “super worker” or the like. As such, recommending tasks to particular groups of workers is treated in the same way as recommending tasks to individual workers, with the primary difference being that the group entity will inherently have increased capacity to complete more tasks than individual workers.”].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
It has been held that a prior art reference must either be in the field of applicant’s endeavor or, if not, then be reasonably pertinent to the particular problem with which the applicant was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992).
Claim(s) 6 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nath et al. (US 2015/0317582 A1) in view of Blakely et al. (US 2019/0385125 A1).
Regarding claims 6 and 13 (currently amended), Nath teaches the method of claim 8, further comprising weighting the selected at least one attribute of the data of the one or more human individuals based on a coefficient [see at least [0021] attribute data “Advantageously the Context-Aware Crowdsourced Task Optimizer can use any of a large number of optimization algorithms or processes to solve this optimization problem, e.g., greedy algorithms, expectation-maximization algorithms, etc. For example, as is well known to those skilled in the art, in mathematical optimization, constrained optimization … Some of the constraints considered by the Context-Aware Crowdsourced Task Optimizer include, but are not limited to, available workers, present and future contexts of those available workers, … etc.”;
[0076] weights of attribute data, profile, and profile value “In view of the considerations discussed above regarding data modeling and worker observations, the probability of a worker ω completing a particular task τ can be denoted by Pτω (y=1|x; θ), where y indicates whether the worker completes a task (y=1), or not (y=0), vector x=(x1, x2, . . . ) includes real-time parameters (such as payment, distance, task complexity, etc.) and θω=(θ1, θ2, θ3, . . . ) is the learned coefficients/weights corresponding to those parameters for each particular worker. In other words, as noted above, the predictive worker model θω for each worker is a parameter vector that is learned using regression from task history of each corresponding worker ω. The following discussion expands these concepts.”;
[0089-0090] weights of attribute data, profile, and profile value are continually updated “2.3.3 Model Updates: The learned worker models are updated over time as more data is collected for each worker. Periodic, continuous, or real-time updates to these models over time ensures that the Context-Aware Crowdsourced Task Optimizer has the ability to provide accurate and up to date estimations of workers' predicted behaviors regarding recommended tasks or task bundles.”;
[0028] profile value and pairing task to worker (individual) based on profile value “Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), a task recommendation module 130 evaluates those tasks, any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool.”].
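For the reader's convenience, the per-worker model quoted from Nath [0076] has a standard logistic form. The following sketch is illustrative only: the function and parameter names are the examiner's, not Nath's, and the numeric values are hypothetical.

```python
import math

def completion_probability(x, theta):
    """Illustrative logistic form of Nath [0076]: P(y=1 | x; theta),
    the probability that a worker completes a task, where x holds
    real-time task parameters (e.g., payment, distance, complexity)
    and theta holds the coefficients/weights learned for each
    particular worker from that worker's task history."""
    z = sum(t * xi for t, xi in zip(theta, x))  # linear combination theta . x
    return 1.0 / (1.0 + math.exp(-z))           # logistic link

# Hypothetical values: x = (payment, distance, complexity).
p = completion_probability([10.0, 2.5, 0.3], [0.4, -0.8, -1.2])
assert 0.0 < p < 1.0  # a valid probability
```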
Nath does not explicitly teach, but Blakely discloses,
, further comprising weighting the selected at least one attribute of the data of the one or more individuals based on a predetermined coefficient [see at least [0063] “The worker matcher algorithm may compute worker score values for an individual worker using the following process. The selected attributes from the employer profile and/or job profile may be given a weighted score between 0 and 100 based on the number of attributes selected or based on a predetermined weight.”;
[0064] “For example, if 4 attributes are selected and equally weighted, then each attribute score weight for each attribute would be 0.25 (or 25%). For each selected attribute, the attribute values in the employer profile may be compared with the attribute values associated with the worker and/or job. Each attribute may have a “weight adjustment factor” which may be a value between zero and one. These values default to one, but may be tuned to give added weight or to reduce the weight of individual attributes”].
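For the reader's convenience, the weighted scoring described in Blakely [0063]-[0064] can be sketched as follows; this sketch is illustrative only (the names are the examiner's, not Blakely's), and it follows Blakely's example of equal base weights per selected attribute (0.25 when four attributes are selected) with per-attribute "weight adjustment factors" defaulting to one.

```python
def worker_score(attribute_scores, adjustment_factors=None):
    """Illustrative sketch of Blakely [0063]-[0064]: each selected
    attribute score (0-100 scale) receives an equal base weight of
    1/n, optionally tuned by a per-attribute weight adjustment
    factor in [0, 1] that defaults to one; the worker score is the
    weighted sum."""
    n = len(attribute_scores)
    if adjustment_factors is None:
        adjustment_factors = [1.0] * n  # defaults to one per [0064]
    base_weight = 1.0 / n  # e.g., 0.25 when 4 attributes are selected
    return sum(base_weight * f * s
               for f, s in zip(adjustment_factors, attribute_scores))

# Four equally weighted attribute scores on a 0-100 scale:
score = worker_score([80, 60, 100, 40])
# 0.25 * (80 + 60 + 100 + 40) = 70.0
```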
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nath with Blakely to include the limitation(s) above as disclosed by Blakely. Nath (abstract and [0001-0002]) teaches matching worker(s) to task(s) based on a diverse set of factors, and Blakely improves this by expanding how the diverse set of factors is used, such as by the use of predetermined data [see at least Blakely [0003-0004, 0063-0064]].
Furthermore, all of the claimed elements were known in the prior art of a) Nath and b) Blakely, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention.
Conclusion
When responding to the office action, any new claims and/or limitations should be accompanied by a reference as to where the new claims and/or limitations are supported in the original disclosure.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP §706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES WEBB whose telephone number is (313)446-6615. The examiner can normally be reached on M-F 10-3.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor can be reached on (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.W./Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624