DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice for all US Patent Applications filed on or after March 16, 2013
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/1/25 has been entered.
Status of the Claims
This communication is in response to communications received on 12/1/25. Claims 1, 8, and 15 are amended, no claims are cancelled, no claims are new, and applicant does not provide any information on where support for the amendments can be found in the instant specification. Therefore, claims 1-20 are pending and have been addressed below.
Response to Arguments
Applicant’s arguments, see applicant’s remarks, filed 12/1/25, with respect to the rejections of claims 1-20 under 35 U.S.C. 101 have been fully considered but they are not persuasive insofar as they apply to the amended 101 rejection(s) below.
Applicant respectfully traversed the rejection on pp. 9-11.
The Examiner respectfully disagrees because the claims fall under “apply it” item (2) of MPEP 2106.05(f): whether the claim invokes computers or other machinery merely as a tool to perform an existing process.
The claims here are not like those the Federal Circuit (Court) found patent eligible in McRO because the patent claims here do not recite rules, like the claimed rules in McRO, that enable automation of specific tasks (there, animation tasks) that previously could not be automated. Additionally, the McRO court discussed the absence of preemption in determining that the claimed invention was not "directed to" a judicial exception. Other decisions, however, do not consider the absence of preemption as conferring patent eligibility (e.g., Synopsys, FairWarning, Intellectual Ventures v. Symantec, Sequenom, and OIP). Furthermore, the test is not preemption but the two-step Alice test.
The claims here are not like those the Federal Circuit found patent eligible in Enfish because the claimed steps are a process that qualifies as an abstract idea for which computers are invoked merely as a tool, rather than being directed to a specific asserted improvement in computer capabilities.
The claims here are not like those the Federal Circuit found patent eligible in Bascom because the claims do not have an inventive concept found in the non-conventional and non-generic arrangement of the additional elements.
Thus, the arguments are unpersuasive.
Applicant’s arguments, see applicant’s remarks, filed 12/1/25, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered but they are not persuasive insofar as they apply to the amended 103 rejection(s) below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more, as set forth below.
The limitations below of representative claims 1, 8, and 15, under their broadest reasonable interpretation, are directed to selecting workers to complete a task.
Step 1: The claims as drafted are a process (claims 1-7 recite a series of steps) and a system (claims 8-20 recite a series of components).
Step 2A – Prong 1: The claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) (emphasis added):
Claim 8: identifying a task to be completed by the hybrid workforce in the environment, wherein the hybrid workforce includes a plurality of human workers and a plurality of robotic workers;
classifying the task based on a suitability assessment for being completed by a human worker or by a robot worker;
obtaining a profile for each human worker in the plurality of human workers and for each robotic worker in the plurality of robotic workers;
creating a digital twin instance for each human worker and each robotic worker;
simulating the task using the digital twin instance for each respective worker using a physics-based or kinematic simulation executed by a hybrid workforce optimization module;
updating a capability of each human worker and each robotic worker to perform the task based on results of the simulation including updating at least one task-specific metric generated by the physics-based or kinematic simulation;
determining a capability to perform the task to be completed by the hybrid workforce for each human worker and for each robotic worker based on a combination of the profile and the updated, simulation-derived capability;
associating the task with a robotic worker in the plurality of robotic workers, wherein the robotic worker has a highest capability when the task is suitable for being completed by the robot according to the combined capability; and
displaying an assignment of the task to be completed by the hybrid workforce on a device associated with the environment, wherein the assignment includes a classification of the task and an association with the hybrid workforce based on the simulation-enhanced capability determination.
Claim(s) 1 and 15: same analysis as claim(s) 8.
Dependent claims 2-7, 9-14, and 16-20 recite the same or similar abstract idea(s) as independent claims 1, 8, and 15, with merely a further narrowing of the abstract idea(s).
The identified limitations of the independent and dependent claims above fall well-within the groupings of subject matter identified by the courts as being abstract concepts of:
a method of organizing human activity (commercial or legal interactions including advertising, marketing or sales activities or behaviors, or business relations) because the invention is directed to economic and/or business relationships as they are associated with selecting workers to complete a task.
Step 2A – Prong 2: This judicial exception is not integrated into a practical application because:
The additional elements not encompassed by the abstract idea include: robotic, robot, device (claims 1, 8, 15); computer (claim 1); computer system, processor, computer-readable memories, non-transitory computer-readable tangible storage media (claim 8); computer program product, a non-transitory computer-readable storage medium (claim 15); robotic, robot (claims 2, 5, 9, 14, 16, 19); and machine learning model (claims 7, 14).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, fail to describe:
Improvements to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a)
Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition – see Vanda Memo
Applying the judicial exception with, or by use of, a particular machine – see MPEP 2106.05(b)
Effecting a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and Vanda Memo.
Thus the additional elements as described above with respect to Step 2A Prong 2 merely amount to (as additionally noted by the instant specification at [0018]) a tool and/or general purpose computer invoked to apply the instructions of an abstract idea in a particular technological environment. Mere application of an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular technological field, do not integrate an abstract idea into a practical application (MPEP 2106.05(f) & (h)).
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As described above with respect to Step 2A Prong 2, the additional elements merely amount to (as additionally noted by the instant specification at [0018]) a tool and/or general purpose computer invoked to apply the instructions of an abstract idea in a particular technological environment. Mere application of an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular technological field, do not integrate an abstract idea into a practical application. The combination and arrangement of the above-identified additional elements, when analyzed under Step 2B, similarly fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea, for the same reasons as set forth above (MPEP 2106.05(f) & (h)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
It has been held that a prior art reference must either be in the field of applicant’s endeavor or, if not, then be reasonably pertinent to the particular problem with which the applicant was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992).
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nath et al. (US 2015/0317582 A1) in view of Akella et al. (US 2019/0138973 A1) and Gienger (US 2022/0314437 A1).
Regarding claims 1, 8, and 15 (currently amended), Nath teaches a computer-implemented method for optimizing a hybrid workforce to complete tasks in an environment, the method comprising
{a computer system for optimizing a hybrid workforce to complete tasks in an environment, the computer system comprising: one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage media, and program instructions stored on at least one of the one or more tangible storage media for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising: - claim 1}
{a computer program product for optimizing a hybrid workforce to complete tasks in an environment, the computer program product comprising: a computer-readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: - claim 15} [see at least
see at least Fig. 5 and [0152-0153] “the computational capability of computing device 500 is generally illustrated by one or more processing unit(s) 510, and may also include one or more GPUs 515, either or both in communication with system memory 520. … the simplified computing device 500 may also include other components, such as, … storage devices 560” ]:
identifying a task to be completed by the hybrid workforce in the environment, wherein the hybrid workforce includes a plurality of human workers and a plurality of robotic workers [see at least [0026-0028] “the processes enabled by the Context-Aware Crowdsourced Task Optimizer begin operation by using a task input module 100 to receive one or more tasks 105 from human or virtual task publishers (110, 115, 120). In addition, the task input module 100 also receives one or more optional task contexts, e.g., prices, location, deadlines, number of instances, etc.
In various embodiments, a task feedback module 125 compute completion rates for various contexts such as prices, deadlines, etc., and uses these completion rates to provide guidance to task publishers (110, 115, 120) for specifying various task contexts prior to publishing those tasks 105. Note that the concept of providing task context feedback to assist the task publishers in specifying or otherwise setting those contexts is discussed in further detail in Section 2.7 of this document.
Once one or more tasks 105 have been received from any of the task publishers (110, 115, 120), … any associated task contexts, learned worker models 135 for one or more human or virtual workers (140, 145, 150, 155) in a worker pool 160, and current and future worker contexts (165 and 170, respectively) to construct task bundles that optimize completion rates and pricing relative to one or more particular workers in the worker pool.”;
[0017] “However, it should be understood that the processes enabled by the Context-Aware Crowdsourced Task Optimizer apply to both human and virtual workers. Examples of virtual workers include, but are not limited to, computers and applications or tasks running on those computers (including both fixed and mobile computing devices), robots, drones, driverless taxis, etc. As such, the discussion of “workers” in the following discussion should be understood to apply to both real and virtual workers.”];
obtaining a profile for each human worker in the plurality of human workers and for each robotic worker in the plurality of robotic workers;
for each human worker and each robotic worker;
each human worker and each robotic worker;
determining a capability to perform the task to be completed by the hybrid workforce for each human worker and for each robotic worker based on a combination of the profile and other data [for the limitations above, see at least Fig. 5 and [0153] “the simplified computing device 500 may also include other components, such as, … storage devices 560”;
[0057] “Context-Aware Crowdsourced Task Optimizer receives inputs including historical, real-time and future context information of multiple workers, and task properties (such as location, payment, deadline, etc.) from one or more task publishers. Given these inputs, the Context-Aware Crowdsourced Task Optimizer outputs recommended assignments of bundles of tasks to active workers. In making these recommendations, the Context-Aware Crowdsourced Task Optimizer uses learned worker models to make recommendations in a way that meets various criteria, including maximizing task completion rates where task payments are fixed, jointly maximizing task completion rates while minimizing task payments using adaptive pricing, etc.”;
[0021] attribute data “Advantageously the Context-Aware Crowdsourced Task Optimizer can use any of a large number of optimization algorithms or processes to solve this optimization problem, e.g., greedy algorithms, expectation-maximization algorithms, etc. For example, as is well known to those skilled in the art, in mathematical optimization, constrained optimization … Some of the constraints considered by the Context-Aware Crowdsourced Task Optimizer include, but are not limited to, available workers, present and future contexts of those available workers, … etc.”;
[0076] weights of attribute data, profile, and profile value “In view of the considerations discussed above regarding data modeling and worker observations, the probability of a worker ω completing a particular task τ can be denoted by Pτω (y=1|x; θ), where y indicates whether the worker completes a task (y=1), or not (y=0), vector x=(x1, x2, . . . ) includes real-time parameters (such as payment, distance, task complexity, etc.) and θω=(θ1, θ2, θ3, . . . ) is the learned coefficients/weights corresponding to those parameters for each particular worker. In other words, as noted above, the predictive worker model θω for each worker is a parameter vector that is learned using regression from task history of each corresponding worker ω. The following discussion expands these concepts.”;
[0089-0090] weights of attribute data, profile, and profile value are continually updated “2.3.3 Model Updates: The learned worker models are updated over time as more data is collected for each worker. Periodic, continuous, or real-time updates to these models over time ensures that the Context-Aware Crowdsourced Task Optimizer has the ability to provide accurate and up to date estimations of workers' predicted behaviors regarding recommended tasks or task bundles.”];
associating the task with a robotic worker in the plurality of robotic workers, wherein the robotic worker has a highest capability when the task is suitable according to the combined capability [see at least [0057] “Context-Aware Crowdsourced Task Optimizer receives inputs including historical, real-time and future context information of multiple workers, and task properties (such as location, payment, deadline, etc.) from one or more task publishers. Given these inputs, the Context-Aware Crowdsourced Task Optimizer outputs recommended assignments of bundles of tasks to active workers. In making these recommendations, the Context-Aware Crowdsourced Task Optimizer uses learned worker models to make recommendations in a way that meets various criteria, including maximizing task completion rates where task payments are fixed, jointly maximizing task completion rates while minimizing task payments using adaptive pricing, etc.”].
Nath does not explicitly teach, but Akella discloses,
classifying the task based on a suitability assessment for being completed by a human worker or by a robot worker;
associating the task with a robotic worker in the plurality of robotic workers, wherein the robotic worker has a highest capability when the task is suitable for being completed by the robot [for the limitations above, see at least [0006] tasks are classified and then assigned to workers (actors) based on classification (categories) including task requirements “Embodiments of the present invention provide a deep and continuous data set including process data, quality data, specific actor data, and ergonomic data (among others) to automatically determine job assignments that maximize efficiency, quality and actor safety. Using the data set, tasks may be assigned to actors based on objective statistical data such as skills, task requirements, ergonomics and time availability. Assigning tasks in this way can provide unique value for manufacturers who currently conduct similar analyses using only minimal observational data.”;
[0039] further define worker (actor) “As used herein the term actor can include actors, workers, employees, operators, assemblers, contractors, associates, managers, users, entities, humans, cobots, robots, and the like as well as combinations of them. As used herein the term robot can include a machine, device, apparatus or the like, especially one programmable by a computer, capable of carrying out a series of actions automatically. The actions can be autonomous, semi-autonomous, assisted, or the like. As used herein the term cobot can include a robot intended to interact with humans in a shared workspace. As used herein the term package can include packages, packets, bundles, boxes, containers, cases, cartons, kits, and the like.”;
[0109] further define task requirement category as two categories of human and robot “According to some embodiments, actors include both human workers and robots working side-by-side. It is appreciated that robots do not tire as humans do, the actions of robots are more repeatable than humans, and robots are unable to perform some tasks that humans can perform.”;
[0099] further define task requirement categories of human and robot to include temperature “In one embodiment, an entity (e.g., a human, robot, target object, etc.) can have a first particular temperature range and the station environment can have a second particular temperature range.”;
Fig. 16 and [0119] “The job assignment output 1600 is generated using a computer-implemented job assignment method as described herein according to embodiments of the present invention. The output 1600 includes a list of associates 1605 assigned to station assignment 1610. The list of associates 1605 further includes actor skill levels indicating a good fit, an average fit, a bad fit, or not enough data to determine a skill level. The actor skill level (e.g., associate skill level 1605, station assignment 1610, skill fit 1615, station fit 1620, and ergonomic fit 1625) may be determined according to one or more equations depicted in Table 6.”]; and
displaying an assignment of the task to be completed by the hybrid workforce on a device associated with the environment, wherein the assignment includes a classification of the task and an association with the hybrid workforce based on data [see at least Fig. 1 and [0047] “The one or more interfaces 135-165 can also include but not limited to one or more displays … For example, the one or more front-end units 190 can output one or more graphical user interfaces to present training content, work charts, real time alerts, feedback and or the like on one or more interfaces 165, such displays at one or more stations 120-130, at management portals on tablet PCs, administrator portals as desktop PCs or the like.”;
[0006] tasks are classified and then assigned to workers (actors) based on classification (categories) including task requirements “Embodiments of the present invention provide a deep and continuous data set including process data, quality data, specific actor data, and ergonomic data (among others) to automatically determine job assignments that maximize efficiency, quality and actor safety. Using the data set, tasks may be assigned to actors based on objective statistical data such as skills, task requirements, ergonomics and time availability. Assigning tasks in this way can provide unique value for manufacturers who currently conduct similar analyses using only minimal observational data.”;
Fig. 16 and [0119] “The job assignment output 1600 is generated using a computer-implemented job assignment method as described herein according to embodiments of the present invention. The output 1600 includes a list of associates 1605 assigned to station assignment 1610.”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nath with Akella to include the limitation(s) above as disclosed by Akella. Nath (abstract and [0001-0002]) teaches matching worker(s) to task(s) based on a diverse set of factors, and Akella improves this by expanding how to use the diverse set of factors, such as by user or user type constraints [see at least Akella [0003-0005, 0006, 0099, 0109]].
Furthermore, all of the claimed elements were known in the prior art of a) Nath and b) Akella, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention.
Nath in view of Akella does not explicitly teach, but Gienger discloses,
creating a digital twin instance for each worker [see at least Fig. 1 and [0087-0088] “FIG. 1 depicts in the left portion a bi-manual robot 2, which manipulates a large object 3 using two effectors 8. The right portion of FIG. 1 depicts an alternate embodiment of a virtual character in front and side view. The virtual character may be a person 2′, e.g. a worker manipulating the physical object 3 with his two arms 8′. The description of the virtual character in case of the bi-manual robot 2 and of the virtual human worker 2′ with regard to manipulating the virtual object 3 in the virtual environment correspond to each other. The following description of a preferred embodiment uses the bi-manual robot 2 as an example for sake of conciseness, without intending a restriction of the simulation method and the simulation system 1 to the bi-manual robot 2, or humanoid robots generally.”;
[0066, 0228] “The method may include performing the method for solving a predetermined task with each of at least two different virtual characters. The method proceeds by performing a step of determining which of the at least two different virtual characters is more suitable by comparing quality criteria for performing the task by each of the at least two different virtual characters.”];
simulating the task using the digital twin instance for each respective worker using a physics-based or kinematic simulation executed by a hybrid workforce optimization module {simulating the task using the digital twin instance for each respective worker to generate a simulation-based performance outcome for that worker – claim 1} [see at least [0094] “The simulation system 1 computes a posture of the robot 2 for each of the sequence of steps and adds all postures into an overall kinematic model. The simulation system 1 analyses the sequence of postures for contact changes and object motions. In particular, the robotic system 1 applies algorithms “connect contacts” and “connect objects” to the kinematic model.”;
[0088] “The following description of a preferred embodiment uses the bi-manual robot 2 as an example for sake of conciseness, without intending a restriction of the simulation method and the simulation system 1 to the bi-manual robot 2, or humanoid robots generally.”];
updating a capability of each worker to perform the task based on results of the simulation including updating at least one task-specific metric generated by the physics-based or kinematic simulation {updating an individualized, task-specific capability of each worker to perform the task based on the corresponding simulation-based performance outcome – claim 1};
the updated, simulation-derived capability;
the simulation-enhanced capability determination [for the limitations above, see at least [0124] “The object tracking device 6 of the simulation system 1 may acquire sensor data for updating the object pose and the task objective. The updated task objective and the updated object pose ate then used to update the task definition and to perform motion planning according to step S8 using the updated task definition. The closed loop of the flowchart of FIG. 4 implements an online adaptation system. The online adaptation structure with steps S3-S4-S5-S6-S7-S8-S9-S3 according to FIG. 4 is a particularly advantageous structure for performing tasks in collaboration with a human, as the task definition may change due to unpredicted actions of the collaborating human or new instructions provided by the collaborating human.”;
[0110] “The task definition from steps S1 and S2 provides the basis for the step S3 of performing motion planning. In step S3, the simulation system 1 executes a motion planning algorithm on the task definition in order to generate a sequence of steps. The steps include a sequence of postures of the simulation system 1, in particular a sequence of postures of the effectors 8 of the robotic system 1 and a sequence of object poses to arrive at fulfilling the determined task objective, starting at the initial object pose. The motion planning algorithm applied in step S3 may be one of a plurality of known planning and motion generating algorithms available and discussed in literature in order to generate the sequence of postures provided by the step of motion planning. The robotic system 1 computes a posture of the robot 2, and in particular the effectors 8 of the robot 2 for each step of the sequence of postures. The computed postures are added to a kinematic model of the task.”;
[0088] “The following description of a preferred embodiment uses the bi-manual robot 2 as an example for sake of conciseness, without intending a restriction of the simulation method and the simulation system 1 to the bi-manual robot 2, or humanoid robots generally.”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nath in view of Akella with Gienger to include the limitation(s) above as disclosed by Gienger. Nath in view of Akella (Nath abstract and [0001-0002]) teaches matching worker(s) to task(s) based on a diverse set of factors, and Gienger improves this by expanding how to use the diverse set of factors, such as via simulation [see at least Gienger [0015-0017]].
Furthermore, all of the claimed elements were known in the prior art of a) Nath in view of Akella and b) Gienger, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claims 2, 9, and 16, modified Nath teaches the computer-implemented method of claim 1,
and Nath teaches further comprising transmitting an instruction set to the robotic worker associated with the task to be completed by the hybrid workforce [see at least [0057] “Context-Aware Crowdsourced Task Optimizer receives inputs including historical, real-time and future context information of multiple workers, and task properties (such as location, payment, deadline, etc.) from one or more task publishers. Given these inputs, the Context-Aware Crowdsourced Task Optimizer outputs recommended assignments of bundles of tasks to active workers. In making these recommendations, the Context-Aware Crowdsourced Task Optimizer uses learned worker models to make recommendations in a way that meets various criteria, including maximizing task completion rates where task payments are fixed, jointly maximizing task completion rates while minimizing task payments using adaptive pricing, etc.”;
[0055] “Another example of actively notifying workers is that in various embodiments, the Context-Aware Crowdsourced Task Optimizer pushes recommendations of tasks to workers that are near tasks that can be accepted, or to remind the worker to complete tasks depending upon time or current location of worker. In other words, when the worker is near (or heading towards) a task location, the system can remind the worker to perform a previously assigned task. For example, the Context-Aware Crowdsourced Task Optimizer can alert the worker or send a message to a computing device (e.g., a cell phone) of the worker, such as, for example, “Based on your typical daily commute (or current travel route), you will be passing near Restaurant X. If you stop in and take a picture of the menu we will pay you a $5 for completion of that task.” In other words, the Context-Aware Crowdsourced Task Optimizer tries to learn known present locations, travel routes, anticipated future locations, etc., for workers, and then to use this and other information to recommend bundles of one or more tasks to the workers based on the location of those tasks relative to the location of the worker.”].
Regarding claims 3, 10, and 17, modified Nath teaches the computer-implemented method of claim 1.
Modified Nath does not explicitly teach, but Akela discloses, further comprising:
associating the task with a human worker in the plurality of human workers, wherein the human worker has the highest capability when the task is suitable for being completed by the human [see at least [0006] tasks are classified and then assigned to workers (actors) based on classification (categories) including task requirements “Embodiments of the present invention provide a deep and continuous data set including process data, quality data, specific actor data, and ergonomic data (among others) to automatically determine job assignments that maximize efficiency, quality and actor safety. Using the data set, tasks may be assigned to actors based on objective statistical data such as skills, task requirements, ergonomics and time availability. Assigning tasks in this way can provide unique value for manufacturers who currently conduct similar analyses using only minimal observational data.”;
[0039] further define worker (actor) “As used herein the term actor can include actors, workers, employees, operators, assemblers, contractors, associates, managers, users, entities, humans, cobots, robots, and the like as well as combinations of them. As used herein the term robot can include a machine, device, apparatus or the like, especially one programmable by a computer, capable of carrying out a series of actions automatically. The actions can be autonomous, semi-autonomous, assisted, or the like. As used herein the term cobot can include a robot intended to interact with humans in a shared workspace. As used herein the term package can include packages, packets, bundles, boxes, containers, cases, cartons, kits, and the like.”;
[0109] further define task requirement category two categories of human and robot “According to some embodiments, actors include both human workers and robots working side-by-side. It is appreciated that robots do not tire as humans do, the actions of robots are more repeatable than humans, and robots are unable to perform some tasks that humans can perform.”;
[0099] further define task requirement categories of human and robot to include temperature “In one embodiment, an entity (e.g., a human, robot, target object, etc.) can have a first particular temperature range and the station environment can have a second particular temperature range.”;
Fig. 16 and [0119] “The job assignment output 1600 is generated using a computer-implemented job assignment method as described herein according to embodiments of the present invention. The output 1600 includes a list of associates 1605 assigned to station assignment 1610. The list of associates 1605 further includes actor skill levels indicating a good fit, an average fit, a bad fit, or not enough data to determine a skill level. The actor skill level (e.g., associate skill level 1605, station assignment 1610, skill fit 1615, station fit 1620, and ergonomic fit 1625) may be determined according to one or more equations depicted in Table 6.”]; and
transmitting a notification to a second device associated with the human worker [see at least Fig. 1 and “The one or more interfaces 135-165 can also include but not limited to one or more displays, touch screens, touch pads, keyboards, pointing devices, button, switches, control panels, actuators, indicator lights, speakers, Augmented Reality (AR) interfaces, Virtual Reality (VR) interfaces, desktop Personal Computers (PCs), laptop PCs, tablet PCs, smart phones, robot interfaces, cobot interfaces. The one or more interfaces 135-165 can be configured to receive inputs from one or more actors 120-130, one or more engines 170 or other entities. Similarly, the one or more interfaces 135-165 can be configured to output to one or more actors 120-130, one or more engine 170 or other entities. For example, the one or more front-end units 190 can output one or more graphical user interfaces to present training content, work charts, real time alerts, feedback and or the like”;
[0006] tasks are classified and then assigned to workers (actors) based on classification (categories) including task requirements “Embodiments of the present invention provide a deep and continuous data set including process data, quality data, specific actor data, and ergonomic data (among others) to automatically determine job assignments that maximize efficiency, quality and actor safety. Using the data set, tasks may be assigned to actors based on objective statistical data such as skills, task requirements, ergonomics and time availability. Assigning tasks in this way can provide unique value for manufacturers who currently conduct similar analyses using only minimal observational data.”;
Fig. 16 and [0119] “The job assignment output 1600 is generated using a computer-implemented job assignment method as described herein according to embodiments of the present invention. The output 1600 includes a list of associates 1605 assigned to station assignment 1610.”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify modified Nath with Akela to include the limitation(s) above as disclosed by Akela. Modified Nath (Nath abstract and [0001-0002]) teaches matching worker(s) to task(s) based on a diverse set of factors, and Akela improves this by expanding how to use the diverse set of factors, such as by user or user type constraints [see at least Akela [0003-0005, 0006, 0099, 0109]].
Furthermore, all of the claimed elements were known in the prior art of (a) modified Nath and (b) Akela, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claims 4, 11, and 18, modified Nath teaches the computer-implemented method of claim 1,
and Nath teaches capturing task data from the environment, wherein the task data is selected from a group consisting of data and data; and
identifying the task to be completed by the hybrid workforce in the task data [for the limitations above, see at least [0029] “In general, the aforementioned learned worker models 135 are generated by a worker model generation module 180 that uses any of a wide variety of machine learning techniques to generate machine-learned models for each worker (or worker group). The worker model generation module 180 also update the learned worker models 135 over time when additional observations (e.g., task completions, worker history, etc.) become available.”;
[0030] “A context update module 185 is used to evaluate sensor data from workers' devices, and/or user input to determine and update the current and future worker contexts (165 and 170, respectively) whenever additional worker context information becomes available”].
Modified Nath does not explicitly teach, but Akela discloses, further comprising:
capturing task data from the environment, wherein the task data is selected from a group consisting of video data and audio data [see at least [0006] tasks are classified and then assigned to workers (actors) based on classification (categories) including task requirements “Embodiments of the present invention provide a deep and continuous data set including process data, quality data, specific actor data, and ergonomic data (among others) to automatically determine job assignments that maximize efficiency, quality and actor safety. Using the data set, tasks may be assigned to actors based on objective statistical data such as skills, task requirements, ergonomics and time availability. Assigning tasks in this way can provide unique value for manufacturers who currently conduct similar analyses using only minimal observational data.”;
[0018, 0112] “The sensors 1115-1125 may be configured to continuously monitor the activities of actors 1145-1155, and the data captured by the sensors 1115-1125 can be described according to a distribution function to reflect variations in performance or steps of a process. For example, the sensor data may be provided in a sensor stream including video frames, thermal sensor data, force sensor data, audio sensor data, and/or light sensor data. In this way, embodiments of the present invention are able to apply relevant mathematical programming techniques (e.g., parallel representations or multi-stage optimization techniques) to efficiently assign actors to specific actions. For example, an actor's performance (e.g., actors 1145-1155) may be tracked over time using sensors 1115-1125 to determine/characterize the actor's skill level, the time spent at various stations, the availability of the actor, and/or the actor's physical/ergonomic ability, and mathematical programming techniques may be applied to the sensor data to efficiently assign the actor to an action.”;
[0032] “FIG. 12 show a flow chart depicting an exemplary sequence of computer implemented steps for automatically observing and analyzing actor activity in real-time in accordance with various embodiments of the present disclosure.”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify modified Nath with Akela to include the limitation(s) above as disclosed by Akela. Modified Nath (Nath abstract and [0001-0002]) teaches matching worker(s) to task(s) based on a diverse set of factors, and Akela improves this by expanding how to use the diverse set of factors, such as by user or user type constraints [see at least Akela [0003-0005, 0006, 0099, 0109]].
Furthermore, all of the claimed elements were known in the prior art of (a) modified Nath and (b) Akela, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention.
Akela also teaches identifying the task to be completed by the hybrid workforce in the task data [for the same reasons as the limitation cited above].
Regarding claims 5, 12, and 19, modified Nath teaches the computer-implemented method of claim 1, as well as
for each robot worker in the plurality of robot workers and for each human worker in the plurality of human workers (determining a capability to perform the task to be completed by the hybrid workforce for each human worker and for each robotic worker from the profile – claim 1) and
the hybrid workforce.
Modified Nath does not explicitly teach, but Gienger discloses,
further comprising:
creating a digital twin instance for each worker of workers [see at least Fig. 1 and [0087-0088] “FIG. 1 depicts in the left portion a bi-manual robot 2, which manipulates a large object 3 using two effectors 8. The right portion of FIG. 1 depicts an alternate embodiment of a virtual character in front and side view. The virtual character may be a person 2′, e.g. a worker manipulating the physical object 3 with his two arms 8′. The description of the virtual character in case of the bi-manual robot 2 and of the virtual human worker 2′ with regard to manipulating the virtual object 3 in the virtual environment correspond to each other. The following description of a preferred embodiment uses the bi-manual robot 2 as an example for sake of conciseness, without intending a restriction of the simulation method and the simulation system 1 to the bi-manual robot 2, or humanoid robots generally.”;
[0066, 0228] “The method may include performing the method for solving a predetermined task with each of at least two different virtual characters. The method proceeds by performing a step of determining which of the at least two different virtual characters is more suitable by comparing quality criteria for performing the task by each of the at least two different virtual characters.”];
simulating the task using the digital twin instance [see at least [0094] “The simulation system 1 computes a posture of the robot 2 for each of the sequence of steps and adds all postures into an overall kinematic model. The simulation system 1 analyses the sequence of postures for contact changes and object motions. In particular, the robotic system 1 applies algorithms “connect contacts” and “connect objects” to the kinematic model.”;
[0088] “The following description of a preferred embodiment uses the bi-manual robot 2 as an example for sake of conciseness, without intending a restriction of the simulation method and the simulation system 1 to the bi-manual robot 2, or humanoid robots generally.”]; and
updating the capability to perform the task to be completed by the workforce based on a digital twin simulation output [for the limitations above, see at least [0124] “The object tracking device 6 of the simulation system 1 may acquire sensor data for updating the object pose and the task objective. The updated task objective and the updated object pose ate then used to update the task definition and to perform motion planning according to step S8 using the updated task definition. The closed loop of the flowchart of FIG. 4 implements an online adaptation system. The online adaptation structure with steps S3-S4-S5-S6-S7-S8-S9-S3 according to FIG. 4 is a particularly advantageous structure for performing tasks in collaboration with a human, as the task definition may change due to unpredicted actions of the collaborating human or new instructions provided by the collaborating human.”;
[0110] “The task definition from steps S1 and S2 provides the basis for the step S3 of performing motion planning. In step S3, the simulation system 1 executes a motion planning algorithm on the task definition in order to generate a sequence of steps. The steps include a sequence of postures of the simulation system 1, in particular a sequence of postures of the effectors 8 of the robotic system 1 and a sequence of object poses to arrive at fulfilling the determined task objective, starting at the initial object pose. The motion planning algorithm applied in step S3 may be one of a plurality of known planning and motion generating algorithms available and discussed in literature in order to generate the sequence of postures provided by the step of motion planning. The robotic system 1 computes a posture of the robot 2, and in particular the effectors 8 of the robot 2 for each step of the sequence of postures. The computed postures are added to a kinematic model of the task.”;
[0088] “The following description of a preferred embodiment uses the bi-manual robot 2 as an example for sake of conciseness, without intending a restriction of the simulation method and the simulation system 1 to the bi-manual robot 2, or humanoid robots generally.”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify modified Nath with Gienger to include the limitation(s) above as disclosed by Gienger. Modified Nath (Nath abstract and [0001-0002]) teaches matching worker(s) to task(s) based on a diverse set of factors, and Gienger improves this by expanding how to use the diverse set of factors, such as via simulation [see at least Gienger [0015-0017]].
Furthermore, all of the claimed elements were known in the prior art of (a) modified Nath and (b) Gienger, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claims 6, 13, and 20, modified Nath teaches the computer-implemented method of claim 1,
and Nath teaches further comprising: monitoring user interactions with the assignment of the task to be completed by the hybrid workforce;
and updating the classification of the task and the association with the hybrid workforce based on the user interactions [see at least [0029] “In general, the aforementioned learned worker models 135 are generated by a worker model generation module 180 that uses any of a wide variety of machine learning techniques to generate machine-learned models for each worker (or worker group). The worker model generation module 180 also update the learned worker models 135 over time when additional observations (e.g., task completions, worker history, etc.) become available.”;
[0030] “A context update module 185 is used to evaluate sensor data from workers' devices, and/or user input to determine and update the current and future worker contexts (165 and 170, respectively) whenever additional worker context information becomes available”].
Regarding claims 7 and 14, modified Nath teaches the computer-implemented method of claim 1,
and Nath teaches wherein a machine learning model that predicts suitability of a worker to a specific task from profile information about the worker is used to classify an identified task [see at least [0029] “In general, the aforementioned learned worker models 135 are generated by a worker model generation module 180 that uses any of a wide variety of machine learning techniques to generate machine-learned models for each worker (or worker group). The worker model generation module 180 also update the learned worker models 135 over time when additional observations (e.g., task completions, worker history, etc.) become available.”;
[0030] “A context update module 185 is used to evaluate sensor data from workers' devices, and/or user input to determine and update the current and future worker contexts (165 and 170, respectively) whenever additional worker context information becomes available”].
Conclusion
When responding to the office action, any new claims and/or limitations should be accompanied by a reference as to where the new claims and/or limitations are supported in the original disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES WEBB whose telephone number is (313)446-6615. The examiner can normally be reached on M-F 10-3.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor, can be reached at (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES WEBB/Examiner, Art Unit 3624