DETAILED CORRESPONDENCE
This non-final office action is in response to the Amendments filed on 06 January 2026, regarding application number 18/452,598.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 29 January 2026 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Amendment
Claims 1-20 remain pending in the application. Claims 1-4, 6-8, 10-11, 13-17 and 19-20 were amended in the Amendments to the Claims.
Applicant’s amendments to the claims have overcome the objections previously set forth in the final office action mailed 06 November 2025. Therefore, the objections have been withdrawn. However, new objections have been made as a result of the amended claims.
Applicant’s amendments to the claims have overcome some of, but not all of, the 35 U.S.C. 112(b) rejections previously set forth in the final office action mailed 06 November 2025. See full details below.
Response to Arguments
Applicant’s arguments, see Pages 11-12, with respect to the rejections of claims 1-20 under 35 U.S.C. 112(a) have been fully considered but are not persuasive because the amended claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention. Accordingly, the rejections under 35 U.S.C. 112(a) have been maintained. See full details below.
Applicant’s arguments, see Page 12, with respect to the rejections of claims 1-20 under 35 U.S.C. 112(b) have been fully considered. Accordingly, some of the rejections have been withdrawn; however, others remain outstanding because they were not addressed by the amendments. Furthermore, new rejections have been made as a result of the amended claims. See full details below.
Applicant’s arguments, see Pages 12-15, with respect to the rejections of claims 1-20 under 35 U.S.C. § 103 have been fully considered but they are not persuasive. Applicant has argued the following:
“Applicant respectfully asserts that claims 1, 8 and 15, as previously amended, are patentable over MORI and in view of LU. None of the cited reference teach or suggest, amended claim 1 (claims 8 and 15 has been amended with similar language to claim 1), "...analyzing the data from the BMI device associated with the user[[s]], wherein the data comprises brain signals relating to activities performed by the user;;" and "analyzing the real- time digital twin simulation based on the activities of the user[[s]] and robotic counterpart, wherein the analyzing comprises of, identifying skills, capabilities of user and the robotic counterpart, predicting whether the user can complete the activities based on a work criteria;"(emphasis added).
For at least these reasons, MORI nor LU does not disclose all the features of the independent claim 1. Each of Applicant's claims 2-7, 9-14 and 17-20 indirectly import the features/limitations described above based on their dependence from claim 1 (similarly with independent claims 8 and 15).
Based on the foregoing, Applicant respectfully request the rejection based on 35 U.S.C. §103 be withdrawn.”
Examiner respectfully disagrees because Mori, in view of Lu, explicitly teaches at least the claimed:
analyzing the data from the BMI device associated with the user, wherein the data comprises brain signals relating to activities performed by the user (see Mori at Fig. 5A, step S102; [0012 "...a skill-level calculation unit configured to calculate the skill level of the target operator with respect to the work step by analyzing the movement data acquired..."], [0015 "Further, for example, a sensor may be used as a camera, a motion capture, a load cell, an electroencephalograph (EEG), a magnetoencephalography (MEG), a magnetic resonance imaging (MRI) configured to capture a blood flow associated with a brain activity using the functional magnetic resonance imaging (fMRI), a brain activity measuring device configured to measure a brain blood flow..."]-[0016], [0100], [0164 "Hereby, the movement data may be acquired by observing, for example, the movement of the body, a brain wave, a brain blood flow, a pupil diameter, a gaze direction, an electrocardiogram, an electromyogram, a galvanic skin reflex, and the like."] and [0181]); and
analyzing the real-time digital twin simulation based on the activities of the user and robotic counterpart, wherein the analyzing comprises of, identifying skills, capabilities of user and the robotic counterpart, predicting whether the user can complete the activities based on a work criteria (
see Mori at Fig. 5A, step S103; [0012 "...a second acquisition unit configured to acquire the movement data of a model operator at a skill level slightly higher than or equal to the skill level calculated for the target operator by accessing a database that stores the movement data of the model operator for each skill level, the movement data acquired throughout the process of a model operator achieving the skill level at which the model operator can suitably accomplish the work step; an instruction determination unit configured to compare the movement data acquired for the model operator..."], [0023]-[0024 "The database in the work support device according to one or more aspects may store a plurality of pieces of movement data each corresponding to a skilled operator among a plurality of skilled operators capable of suitably accomplishing the work step as the movement data of the model operator. "] and [0101]-[0105];
see Lu at Fig. 6, Motion Module and Capability Module; Fig. 7, all; Page 69, Section C., Step 3) "The sequence of operations is simulated and evaluated through the DT models to create an initial task allocation." and Step 4) "In this step, a set of criteria such as the cycle time, the cost and the ergonomics of various task sequences are simulated through the DT framework to produce a set of resource assignments, which include the selections of robots, fixture, grippers and tools. And meanwhile, in order to maximise the abilities of both the operator(s) and robot(s), an optimal workstation layout design will be developed to illustrate the places of various resources, components and equipment to be located."; Pages. 70-71, all, especially Section 4)a) "The third unit is the capability unit. In this unit, the human abilities and skills, as well as physical characteristics of human, are assessed. The operator's psychological well-being and satisfaction must be fulfilled when developing an HRC system. In HRC systems, ergonomics can be assessed through physical and cognitive aspects." and Section 4)c) "The second unit is the degrees of collaboration unit, which defines the degree of task intersection and dependency between the operator and the cobot. This unit draws a clear line between the various required capabilities of cobots in different industrial scenarios."; and
additionally, see the rejections of the above claim limitations under 35 U.S.C. 112(a) and 35 U.S.C. 112(b) below.).
With respect to Applicant’s arguments regarding the remaining dependent claims 3 and 10; 6, 13 and 19; and 7, 14 and 20, Examiner respectfully disagrees for at least the same reasons discussed above because Mori, in view of Lu, teaches at least each and every limitation of independent claims 1, 8 and 15.
For the sake of compact prosecution, a new ground(s) of rejection is made further in view of newly cited reference Timmons et al. (US 11556879 B1). See full rejection details below.
Claim Objections
Claims 1-2, 8 and 15 are objected to because of the following informalities:
Regarding claims 1, 8 and 15, the claims contain the following grammatical errors:
The preambles of the claims should state "managing activities of a user" rather than "managing activities of user".
The succeeding limitations should state "wherein the user is equipped with the BMI device" rather than "wherein the user are equipped with the BMI device".
The succeeding limitations should state "capabilities of the user" rather than "capabilities of user".
Regarding claim 2, the claim should state "comprising a skin response" rather than "comprising of skin response".
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding Claims 1-20
Claims 1, 8 and 15 recite “analyzing the data from the BMI device associated with the user, wherein the data comprises brain signals relating to activities performed by the user;”. The claims define the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved. For example, how is the brain signal data related to activities performed by the user and how is it analyzed? It is additionally unclear whether or not the claimed “activities” in the above limitation refers to the same or different activities recited in the preamble and in the limitations below it. See the 35 U.S.C. 112(b) rejection below. Paragraph [0106] of Applicant’s specification, filed on 21 August 2023, states the following: "The proposed system will be having a historically created knowledge corpus about the quality of the activities and the brain wave signals from the workers while performing the activities", without providing any examples of how the brain wave signal data relates to the activities or how it is analyzed. Additionally, claim 3 recites “determining one or more skillsets and capabilities of the user and robotic counterpart to perform the activities based on the physiological data”, but it is unclear how BMI data is interpreted to determine the skillsets and capabilities of the user and robotic counterpart. As such, the claims lack written description because the claims define the invention using functional language that specifies desired results but the written description fails to identify how the results are achieved. See MPEP 2163.03(V).
Claims 1, 8 and 15 recite “analyzing the real-time digital twin simulation based on the activities of the user and robotic counterpart, wherein the analyzing comprises of, identifying skills, capabilities of user and the robotic counterpart, predicting whether the user can complete the activities based on a work criteria;”. The claims define the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved. The claim limitation is unclear, see the 35 U.S.C. 112(b) rejections below, and therefore it is additionally unclear how the function is performed. For example, how is the real-time digital twin simulation analyzed with respect to skills, capabilities or completion of the activities? Paragraph [0093] of Applicant’s specification mentions the digital twin simulation analysis without providing any details on how it is performed. As such, the claims lack written description because the claims define the invention using functional language that specifies desired results but the written description fails to identify how the results are achieved.
Claims 1, 8 and 15 recite “comparing the data of the user against the robotic counterpart based on the work criteria;”. The claims define the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved. For example, how is the data compared against the robotic counterpart? The claim limitation is unclear, see the 35 U.S.C. 112(b) rejections below, and therefore it is additionally unclear how the function is performed. Paragraph [0095] of Applicant’s specification states the following: "Activity component 111 compares data against digital twin simulation (step 412). In an embodiment, activity component 111, compares the brain waves of human workers against the robotic workers from the digital twin simulation.". This paragraph merely recites that the brain waves are compared against robotic workers from the digital twin simulation without providing any detail on how the comparison is executed. The claims are also inconsistent with this part of the specification because the claims require comparison against the robotic counterpart, while the specification requires comparison against robotic workers from the digital twin simulation. Claims 4 and 6 provide examples of comparison, such as determining whether the user can complete the activities within an allotted timeframe, within an allotted cost and without sacrificing quality and safety. However, the specification does not provide details on how determining completion of the activities within an allotted timeframe, within an allotted cost and without sacrificing quality and safety is performed. As such, the claims lack written description because the claims define the invention using functional language that specifies desired results but the written description fails to identify how the results are achieved.
Claims 1, 8 and 15 recite “determining whether quality of work performed by the user falls under a predetermined threshold of the work criteria;”. The claims define the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved. For example, how is it determined whether the quality of work performed by the user falls under the predetermined threshold? It is unclear what the threshold consists of. Paragraph [0078] of Applicant’s specification states the following: “One embodiment of activity component 111 can collect brain waves of the human worker to determine if any part of the activity should be reallocated to robotic should the quality of work performed by human worker falls below a certain threshold.". This paragraph does not provide details on how quality falling below the threshold is determined. Instead, it generally recites the claimed comparison step. As such, the claims lack written description because the claims define the invention using functional language that specifies desired results but the written description fails to identify how the results are achieved.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claims 1-20
Claims 1, 8 and 15 state the following limitation, "analyzing the data from the BMI device associated with the user, wherein the data comprises brain signals relating to activities performed by the user;". It is unclear whether or not the "activities" is referring to the same or different activities recited in the preamble and in the limitations below it. As such, the claims are indefinite because the metes and bounds of the claim are unclear. The remaining claims are additionally rejected by virtue of dependency on claims 1, 8 and 15. For the purpose of compact prosecution, the "activities" will be interpreted as the same activities mentioned elsewhere in the claim.
Claims 1, 8 and 15 state the following limitation, "wherein the analyzing comprises of, identifying skills, capabilities of user and the robotic counterpart, predicting whether the user can complete the activities based on a work criteria;". It is unclear whether or not the claims require at least one of or all of the elements listed because the claims state "comprises of" rather than "comprises at least one of:" or "comprises:" and there is no recited "and" or "or" to distinguish the claim elements. Additionally, "identifying skills, capabilities of user and the robotic counterpart" is unclear. As such, the claims are indefinite because the metes and bounds of the claim are unclear. The remaining claims are additionally rejected by virtue of dependency on claims 1, 8 and 15. For the purpose of compact prosecution, the above limitation will be interpreted as "wherein the analyzing comprises at least one of: [[,]] identifying skills, identifying capabilities of the user and the robotic counterpart, or predicting whether the user can complete the activities based on a work criteria;".
Claims 1, 8 and 15 state the following limitation, "comparing the data of the user against the robotic counterpart based on the work criteria;". There is insufficient antecedent basis for "the data of the user". The claim previously recites "the data" but not "the data of the user", therefore it is unclear whether or not the elements are referring to the same thing. As such, the claims are indefinite because the metes and bounds of the claim are unclear. The remaining claims are additionally rejected by virtue of dependency on claims 1, 8 and 15. For the purpose of compact prosecution, "the data of the user" will be interpreted as "the data".
Additionally, "comparing the data of the user against the robotic counterpart based on the work criteria;" is unclear. It is unclear how the data of the user is compared against the robotic counterpart. For example, how is data associated with the user compared to the robotic counterpart? The claims do not describe any data associated with the robotic counterpart to establish a basis of how the data is compared. As such, the claims are indefinite because the metes and bounds of the claim are unclear. The remaining claims are additionally rejected by virtue of dependency on claims 1, 8 and 15. For the purpose of compact prosecution, the above limitation will be interpreted as "comparing the data against the digital twin simulation based on the work criteria;".
In claims 3-4, 11, 15 and 17, some claim limitations recite a plurality of "users", while the remaining claim limitations recite a singular "user". See claim 3, "communicating to the users via augmented reality"; claim 4, "reassigning users to another activity"; claim 11, "reassigning users to another activity"; claim 15, "robotic counterpart of the users"; and claim 17, "reassigning users to another activity". Therefore, it is unclear whether the claims as a whole require one user or a plurality of users. As such, the claims are indefinite because the metes and bounds of the claims are unclear. Claims 16-20 are additionally rejected by virtue of dependency on claim 15. For the purpose of compact prosecution, each and every recitation of "user" and "users" will be interpreted as "user".
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 5, 8-9, 12, 15 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Mori et al. (US 20200074380 A1 and Mori hereinafter), in view of NPL - Lu et al. "A generic and modularized Digital twin enabled human-robot collaboration" (Lu hereinafter) and Timmons et al. (US 11556879 B1 and Timmons hereinafter).
Regarding Claims 1, 8 and 15
Regarding claim 1, Mori teaches a computer-implemented method for managing activities of user and robotic counterpart of the user (see all Figs.; [0002] and [0012]), the computer-implemented method comprising:
receiving data from a BMI (brain machine interface) device, wherein the user are equipped with the BMI device (see Fig. 5A, step S101; [0012 "...a first acquisition unit configured to acquire movement data generated by using one or a plurality of sensors to measure the movement of a target operator performing a work step..."], [0015 "For example, the movement data may be acquired by observing … a galvanic skin reflex (GSR) and so forth. Further, for example, a sensor may be used as a camera, a motion capture, a load cell, an electroencephalograph (EEG), a magnetoencephalography (MEG), a magnetic resonance imaging (MRI) configured to capture a blood flow associated with a brain activity using the functional magnetic resonance imaging (fMRI), a brain activity measuring device configured to measure a brain blood flow..."], [0086], [0164 "Hereby, the movement data may be acquired by observing, for example, the movement of the body, a brain wave, a brain blood flow, a pupil diameter, a gaze direction, an electrocardiogram, an electromyogram, a galvanic skin reflex, and the like."] and [0181]);
analyzing the data from the BMI device associated with the user, wherein the data comprises brain signals relating to activities performed by the user (see Fig. 5A, step S102; [0012 "...a skill-level calculation unit configured to calculate the skill level of the target operator with respect to the work step by analyzing the movement data acquired..."], [0015 "Further, for example, a sensor may be used as a camera, a motion capture, a load cell, an electroencephalograph (EEG), a magnetoencephalography (MEG), a magnetic resonance imaging (MRI) configured to capture a blood flow associated with a brain activity using the functional magnetic resonance imaging (fMRI), a brain activity measuring device configured to measure a brain blood flow..."]-[0016], [0100], [0164 "Hereby, the movement data may be acquired by observing, for example, the movement of the body, a brain wave, a brain blood flow, a pupil diameter, a gaze direction, an electrocardiogram, an electromyogram, a galvanic skin reflex, and the like."] and [0181]);
determining the activities of the user (see [0012 "...the skill level indicating a degree on a spectrum of whether or not the target operator can suitably accomplish the work step...the movement data acquired throughout the process of a model operator achieving the skill level at which the model operator can suitably accomplish the work step..."], [0016] and [0087]-[0100]);
creating a digital twin simulation of the user based on the activities (see Fig. 5A, step S103; [0012 "...a second acquisition unit configured to acquire the movement data of a model operator at a skill level slightly higher than or equal to the skill level calculated for the target operator by accessing a database that stores the movement data of the model operator for each skill level..."], [0023] and [0104]-[0105]);
analyzing the digital twin simulation based on the activities of the user, wherein the analyzing comprises of, identifying skills, capabilities of user, predicting whether the user can complete the activities based on a work criteria (see Fig. 5A, step S103; [0012 "...a second acquisition unit configured to acquire the movement data of a model operator at a skill level slightly higher than or equal to the skill level calculated for the target operator by accessing a database that stores the movement data of the model operator for each skill level, the movement data acquired throughout the process of a model operator achieving the skill level at which the model operator can suitably accomplish the work step; an instruction determination unit configured to compare the movement data acquired for the model operator..."], [0023]-[0024 "The database in the work support device according to one or more aspects may store a plurality of pieces of movement data each corresponding to a skilled operator among a plurality of skilled operators capable of suitably accomplishing the work step as the movement data of the model operator. "] and [0101]-[0105]);
collecting a first historical data of the activities of the user and a second historical data from the digital twin simulation (see Figs. 2 and 9A-10, all; [0012 "...a first acquisition unit configured to acquire movement data generated by using one or a plurality of sensors to measure the movement of a target operator performing a work step...a second acquisition unit configured to acquire the movement data of a model operator at a skill level slightly higher than or equal to the skill level calculated for the target operator by accessing a database that stores the movement data of the model operator for each skill level"], [0014], [0024 "The database in the work support device according to one or more aspects may store a plurality of pieces of movement data each corresponding to a skilled operator among a plurality of skilled operators capable of suitably accomplishing the work step as the movement data of the model operator."], [0062]-[0063] and [0112]-[0115]);
comparing the data of the user against the robotic counterpart based on the work criteria (see Fig. 5A, step S104; [0012 "...an instruction determination unit configured to compare the movement data acquired for the model operator with the movement data of the target operator..."] and [0129]-[0132]; this claim limitation has been rejected under 35 U.S.C. 112(a) and 35 U.S.C. 112(b), because it is unclear exactly what the data of the user is compared against and how it is compared. As such, Mori explicitly teaches the claim limitation under the interpretation that the data of the user is compared against the digital twin simulation.);
determining whether quality of work performed by the user falls under a predetermined threshold of the work criteria (see Fig. 5A, step S104; [0013], [0017 "The instruction determination unit may identify one or a plurality of feature amounts exhibiting a major difference between the target operator and the model operator on the basis of the results of comparison..."], [0062 "The skill level indicates a degree on a spectrum of whether or not the target operator can perform the work step 40 suitably. The expression “can suitably accomplish the work step 40” signifies the ability to complete the target work step 40 to at a standard quality within a standard time."]-[0064 "The work support device 1 according to one or more embodiments acquires movement data 70 of a model operator at a skill level slightly higher than or equal to the skill level calculated for the target operator 50 by accessing the learning process database 60."], [0100]-[0101] and [0132 "The feature amounts “having large differences” may refer to first through Nth feature amounts listed in order of the magnitude of difference (where N is an integer of 1 or more), or may refer to feature amounts having differences greater than a threshold."]);
in responsive to the quality of work performed by the user falls under the predetermined threshold, generating one or more solutions in order to improve the work criteria of the user (see Fig. 5A, steps S104-S105; [0012 "...determine an instruction that allows the movement of the target operator with respect to the work step to approach the movement of the model operator on the basis of the results of comparison..."], [0017]-[0018], [0062 "The skill level indicates a degree on a spectrum of whether or not the target operator can perform the work step 40 suitably. The expression “can suitably accomplish the work step 40” signifies the ability to complete the target work step 40 to at a standard quality within a standard time."]-[0064 "The work support device 1 according to one or more embodiments acquires movement data 70 of a model operator at a skill level slightly higher than or equal to the skill level calculated for the target operator 50 by accessing the learning process database 60."] and [0131 "Here in step S104, the control unit 11 compares the movement data 70 of a model operator and the movement data 55 of the target operator 50 for each basic operation. Then, in step S105, the control unit 11 determines an instruction for the target operator 50 with respect to at least one of the plurality of basic operations on the basis of the results of comparison."]-[0132]); and
executing the one or more solutions with new instructions to the user (see Figs. 5A-5B, steps S105-S106; [0012 "...determine an instruction that allows the movement of the target operator with respect to the work step to approach the movement of the model operator on the basis of the results of comparison; and an output unit configured to output information associated with the instruction determined."], [0017]-[0018], [0132] and [0138]).
Regarding claim 8, Mori additionally teaches a computer program product for managing activities of user and robotic counterpart of the user (see all Figs., especially Fig. 3; [0012] and [0067]-[0070]), the computer program product comprising:
one or more computer-readable non-transitory storage media having computer-readable program instructions stored on the one or more computer-readable non-transitory storage media (see Fig. 3, RAM and ROM; [0070]), said program instructions executes a computer-implemented method comprising the above steps (as discussed above).
Regarding claim 15, Mori additionally teaches a computer system for managing activities of user and robotic counterpart of the users (see all Figs., especially Fig. 3; [0012] and [0067]-[0070]), the computer system comprising:
one or more computer processors (see Fig. 3, CPU; [0070]); and
one or more computer readable non-transitory storage media having computer-readable program instructions stored on the one or more computer readable non-transitory storage media (see Fig. 3, RAM and ROM; [0070]), said program instructions executes, by the one or more computer processors, a computer-implemented method comprising the above steps (as discussed above).
Mori is silent regarding determining the activities of a robotic counterpart of the user;
creating a real-time digital twin simulation of the user and the robotic counterpart;
analyzing the real-time digital twin simulation based on the activities of the robotic counterpart; and
collecting the first historical data of the activities of the robotic counterpart.
Lu teaches a computer-implemented method for managing activities of user and robotic counterpart of the user (see Fig. 1, all; Abstract, all), the computer-implemented method comprising:
receiving data from a device, wherein the user are equipped with the device (see Page 69, Section C., Steps 8)-9) "In this stage, all components are evaluated to assess the system's performance. A perception module is also employed to monitor the real-time robot execution, human behaviours and changes in the environment ... The updated sensor data and the system parameters such as human intention, robot status and process parameters, are uploaded to the DT system."; Page 69, Section D., Step 2) "The system has three groups of sensors, employed to measure the statuses of robots, operators and shop-floor environments, respectively.");
analyzing the data associated with the user (see Page 69, Section C., Step 8) "In this stage, all components are evaluated to assess the system's performance. A perception module is also employed to monitor the real-time robot execution, human behaviours and changes in the environment.");
determining the activities of the user and the robotic counterpart of the user (see Pages 68-69, Section C., Steps 1)-5) "Different HRC tasks are identified before creating a DT system … In this step, a workflow is generated to illustrate the tasks being carried out and the contributions of each component of the system during the processing these tasks are also defined in this step."; Page 70, all);
creating a real-time digital twin simulation of the user and the robotic counterpart based on the activities (see Fig. 1, all, especially "Real-time data" and "Real-time Control"; Page 68, Section B. "Generic Digital Twin Model and Design Process", all and Section C., Step 2) "Various digital models of the physical entities are created."; Page 69, Section C., Step 8) "A perception module is also employed to monitor the real-time robot execution, human behaviours and changes in the environment."; Page 70, all);
analyzing the real-time digital twin simulation based on the activities of the user and robotic counterpart, wherein the analyzing comprises of, identifying skills, capabilities of user and the robotic counterpart, predicting whether the user can complete the activities based on a work criteria (see Fig. 6, Motion Module and Capability Module; Fig. 7, all; Page 69, Section C., Step 3) "The sequence of operations is simulated and evaluated through the DT models to create an initial task allocation." and Step 4) "In this step, a set of criteria such as the cycle time, the cost and the ergonomics of various task sequences are simulated through the DT framework to produce a set of resource assignments, which include the selections of robots, fixture, grippers and tools. And meanwhile, in order to maximise the abilities of both the operator(s) and robot(s), an optimal workstation layout design will be developed to illustrate the places of various resources, components and equipment to be located."; Pages 70-71, all, especially Section 4)a) "The third unit is the capability unit. In this unit, the human abilities and skills, as well as physical characteristics of human, are assessed. The operator's psychological well-being and satisfaction must be fulfilled when developing an HRC system. In HRC systems, ergonomics can be assessed through physical and cognitive aspects." and Section 4)c) "The second unit is the degrees of collaboration unit, which defines the degree of task intersection and dependency between the operator and the cobot. This unit draws a clear line between the various required capabilities of cobots in different industrial scenarios."); and
collecting a first historical data of the activities of the user and the robotic counterpart and a second historical data from the real-time digital twin simulation (see Tables I-II, all; the feedback loop of Fig. 4; Fig. 6, "Prior Experiences" and "Prior Activities"; Pages 68-69, Section C., Steps 1)-5) "Different HRC tasks are identified before creating a DT system … In this step, a workflow is generated to illustrate the tasks being carried out and the contributions of each component of the system during the processing these tasks are also defined in this step.").
Mori in view of Lu teaches each and every claim element of at least the independent claims, as discussed above. In the interest of compact prosecution, and to address the possible argument that "Mori is silent regarding determining whether quality of work performed by the user falls under a predetermined threshold of the work criteria; and in responsive to the quality of work performed by the user falls under the predetermined threshold, generating one or more solutions", the Examiner notes that Timmons explicitly teaches these claim limitations.
That is, Timmons teaches a computer-implemented method for managing activities of user and robotic counterpart of the user (see all Figs., especially Figs. 5-6; Col. 3, lines 27-49), the computer-implemented method comprising:
receiving data from a device, wherein the user are equipped with the device (see Fig. 2, motion capture system 220; Fig. 6, step 610; Col. 3, lines 45-49; Col. 17, lines 17-24 "The method 600 begins at block 610, where the motion capture system 220 monitors behavior of a user performing a fulfillment operation in order to collect an instance of motion data. As discussed above, the motion capture system 220 could monitor the user's behavior using motion capture devices 375, such as motion capture camera devices, wearable motion capture equipment, and so on.");
analyzing the data associated with the user (see Fig. 6, steps 610-625; Col. 3, lines 45-49; Col. 17, lines 15-67, especially "In the depicted embodiment, the efficiency evaluation component 330 analyzes each temporal chunk using a respective one or more data models to determine a measure of efficiency for the temporal chunk (block 625).");
determining the activities of the user (see Figs. 5-6, all; Col. 3, lines 27-49; Col. 15, line 50 - Col. 17, line 67, especially "A task analysis component 325 can divide each instance of motion data into a plurality of temporal chunks, based on a set of predefined tasks specified in the task definitions 350 (block 515)." and "The task analysis component 325 divides the instance of motion data into a plurality of temporal chunks, based on a set of predefined sub-tasks that make-up an iteration of the fulfillment operation (block 615).");
creating a digital twin simulation of the user based on the activities (see Fig. 5, all; Col. 3, lines 27-49; Col. 15, line 50 - Col. 17, line 13, especially "The data model training component 320 then trains one or more data models, using the determined subsets (block 530), and the method 500 ends. For instance, the data model training component 320 could train a neural network to evaluate instances of motion data in order to rate the performance of the user from which the motion data was collected. For example, such a neural network could receive as inputs the temporal chunks of motion data corresponding to the set of predefined sub-tasks, and could output one or more measures of quality describing how well the user performed each sub-task, relative to the provided positive samples of motion data. In a particular embodiment, the data model training component 320 is configured to train a neural network to output multiple measures of quality for each of the sub-tasks, e.g., a rating of how ergonomically the task was performed, a rating of how safely the task was performed, a rating of how efficiently the task was performed, etc.");
analyzing the digital twin simulation based on the activities of the user, wherein the analyzing comprises of, identifying skills, capabilities of user, predicting whether the user can complete the activities based on a work criteria (see Fig. 5, all; Fig. 7, all; Col. 3, lines 27-49; Col. 15, line 50 - Col. 17, line 13, especially "The data model training component 320 then trains one or more data models, using the determined subsets (block 530), and the method 500 ends. For instance, the data model training component 320 could train a neural network to evaluate instances of motion data in order to rate the performance of the user from which the motion data was collected. For example, such a neural network could receive as inputs the temporal chunks of motion data corresponding to the set of predefined sub-tasks, and could output one or more measures of quality describing how well the user performed each sub-task, relative to the provided positive samples of motion data. In a particular embodiment, the data model training component 320 is configured to train a neural network to output multiple measures of quality for each of the sub-tasks, e.g., a rating of how ergonomically the task was performed, a rating of how safely the task was performed, a rating of how efficiently the task was performed, etc.");
collecting a first historical data of the activities of the user and a second historical data from the digital twin simulation (see Fig. 5, step 510 and Fig. 6, step 610; Col. 3, lines 27-49; Col. 15, lines 54-62, "As shown, the method 500 begins at block 510, where a motion capture system 220 monitors behavior of users performing fulfillment operations within a fulfillment environment to collect a plurality of instances of motion data. In doing so, the motion capture system 220 can collect data from a plurality of motion capture devices 375, such as motion capture camera devices, wearable motion capture equipment and so on. The plurality of instances of motion data can be transmitted to a motion analysis system 310 for use in training the one or more data models."; Col. 17, lines 17-24 "The method 600 begins at block 610, where the motion capture system 220 monitors behavior of a user performing a fulfillment operation in order to collect an instance of motion data. As discussed above, the motion capture system 220 could monitor the user's behavior using motion capture devices 375, such as motion capture camera devices, wearable motion capture equipment, and so on.");
comparing the data of the user against the robotic counterpart based on the work criteria (see Fig. 6, steps 625-630; Fig. 7, all; Col. 3, lines 45-49; Col. 17, line 43 - Col. 18, line 32, especially "In the depicted embodiment, the efficiency evaluation component 330 analyzes each temporal chunk using a respective one or more data models to determine a measure of efficiency for the temporal chunk (block 625) ... In one embodiment, the efficiency evaluation component 330 is configured to use a respective one or more machine learning data models to analyze temporal chunks for each of the plurality of sub-tasks. For example, for a sub-task that requires the employee to pick a product out of a container, the efficiency evaluation component 330 could analyze the temporal chunk of motion data corresponding to the sub-task using a respective one or more machine learning models that are configured to analyze motion data for the specific sub-task according to one or more metrics. In a particular embodiment, the efficiency evaluation component 330 is configured to use a respective machine learning model for each metric and for each sub-task when analyzing the temporal chunks of motion data.");
determining whether quality of work performed by the user falls under a predetermined threshold of the work criteria (see Fig. 6, step 630; Fig. 7, "Quality"; Col. 12, line 58 - Col. 13, line 5, "In one embodiment, the efficiency evaluation component 330 is configured to provide feedback on the user's performance of the fulfillment operation using the immersive reality device 390. The immersive reality device 390 could represent, for example, an augmented reality or a virtual reality headset. As shown, the immersive reality device 390 includes a rendering component 392 and one or more display devices 394. For example, the efficiency evaluation component 330 could determine that the user performed a particular sub-task poorly (e.g., where the measure of quality calculated for the sub-task is less than a predefined threshold level of quality) and, in response, could transmit instructions to the rendering component 392 to generate a graphical depiction of the user's movement in performing the sub-task."; Col. 18, lines 12-16);
in responsive to the quality of work performed by the user falls under the predetermined threshold, generating one or more solutions in order to improve the work criteria of the user (see Col. 12, lines 29-53; Col. 12, line 65 - Col. 13, line 14 "For example, the efficiency evaluation component 330 could determine that the user performed a particular sub-task poorly (e.g., where the measure of quality calculated for the sub-task is less than a predefined threshold level of quality) and, in response, could transmit instructions to the rendering component 392 to generate a graphical depiction of the user's movement in performing the sub-task. In doing so, the efficiency evaluation component 330 could transmit a temporal chunk of motion data corresponding to the user's performance of the sub-task in question to the immersive reality device 390, and the rendering component 392 could render frames depicting the motion data being applied to a virtual avatar (e.g., an avatar that is configured with movement constraints that depict the realistic movement of human limbs and joints). The rendered frames could then be output using the display device(s) 394, so that the user see how he performed the sub-task and better understand why his performance was sub-optimal."); and
executing the one or more solutions with new instructions to the user (see Col. 12, lines 29-53; Col. 12, line 65 - Col. 13, line 14 "For example, the efficiency evaluation component 330 could determine that the user performed a particular sub-task poorly (e.g., where the measure of quality calculated for the sub-task is less than a predefined threshold level of quality) and, in response, could transmit instructions to the rendering component 392 to generate a graphical depiction of the user's movement in performing the sub-task. In doing so, the efficiency evaluation component 330 could transmit a temporal chunk of motion data corresponding to the user's performance of the sub-task in question to the immersive reality device 390, and the rendering component 392 could render frames depicting the motion data being applied to a virtual avatar (e.g., an avatar that is configured with movement constraints that depict the realistic movement of human limbs and joints). The rendered frames could then be output using the display device(s) 394, so that the user see how he performed the sub-task and better understand why his performance was sub-optimal.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the process/computer program product/computer system of Mori to further include steps of determining the activities of a robotic counterpart of the user, creating a real-time digital twin simulation of the user and the robotic counterpart, analyzing the real-time digital twin simulation based on the activities of the robotic counterpart, and collecting the first historical data of the activities of the robotic counterpart, as taught by Lu, in order to implement an automatic system for handling complex activities with high efficiency by providing a flexible and compatible solution that eases the implementation of human-robot collaboration in the real world.
It additionally would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the process/computer program product/computer system of Mori to further include steps of determining whether quality of work performed by the user falls under a predetermined threshold of the work criteria and in responsive to the quality of work performed by the user falls under the predetermined threshold, generating one or more solutions in order to improve the work criteria of the user, as taught by Timmons, in order to improve user work quality by displaying to the user a sub-task that the user performed with relatively low quality, so that the user better understands why the performance was sub-optimal.
Regarding Claims 2 and 9
Modified Mori teaches the computer-implemented method of claim 1 and the computer program product of claim 8 (as discussed above in claims 1 and 8),
Mori further teaches wherein the data is physiological data from the user comprising of skin response (see [0015 "For example, the movement data may be acquired by observing … a galvanic skin reflex (GSR) and so forth."], [0164] and [0181]).
Regarding Claims 5, 12 and 18
Modified Mori teaches the computer-implemented method of claim 1, the computer program product of claim 8 and the computer system of claim 15 (as discussed above in claims 1, 8 and 15),
Mori further teaches wherein the predetermined threshold is user configurable (see [0101 "The standard time represents a time spent by a standard worker to complete the work step 40, and may be designated as appropriate by operator's input."] and [0127 "For example, the skilled operator may be designated as a model operator via input from an operator. In this case, the control unit 11 acquires the movement data of the designated skilled operator as the movement data 70 of a model operator."]-[0132]).
Regarding Claim 17
Modified Mori teaches the computer system of claim 15 (as discussed above in claim 15),
Mori further teaches wherein the digital twin simulation comprises at least one of: i) predicting whether the user can complete the activities based on a required quality criteria and iv) organizing and/or tracking the one or more skillsets and capabilities of the user (see [0012 "...skill-level calculation unit configured to calculate the skill level of the target operator with respect to the work step by analyzing the movement data acquired, the skill level indicating a degree on a spectrum of whether or not the target operator can suitably accomplish the work step..."], [0013 "The expression “can suitably accomplish the work step” signifies the ability to complete the target work step 40 to at a standard quality within a standard time. The work support device according to the configuration compares the acquired movement data of the model operator with the movement data of the target operator and determines an instruction that helps the movement of the target operator to approach the movement of the model operator with respect to the work."], [0023]-[0024 "The database in the work support device according to one or more aspects may store a plurality of pieces of movement data each corresponding to a skilled operator among a plurality of skilled operators capable of suitably accomplishing the work step as the movement data of the model operator."], [0062] and [0100]-[0101]).
Lu additionally teaches wherein the real-time digital twin simulation comprises at least one of: ii) predicting whether robotic counterpart should assist the user to complete the activities, iv) organizing and/or tracking the one or more skillsets and capabilities of the user and vi) determining whether to combine both the user and the robotic counterpart to complete the activities (see Fig. 6, Capability Module and Motion Module; Page 69, Section C., Step 4) "In this step, a set of criteria such as the cycle time, the cost and the ergonomics of various task sequences are simulated through the DT framework to produce a set of resource assignments, which include the selections of robots, fixture, grippers and tools. And meanwhile, in order to maximise the abilities of both the operator(s) and robot(s), an optimal workstation layout design will be developed to illustrate the places of various resources, components and equipment to be located."; Page 70, Section 4)a) "Current human intension can be derived from past activities and activities planned for the future and action recognition by using a multi-modal sensor fusion [14] is a popular approach to predict human intension. The third unit is the capability unit. In this unit, the human abilities and skills, as well as physical characteristics of human, are assessed. The operator's psychological well-being and satisfaction must be fulfilled when developing an HRC system. In HRC systems, ergonomics can be assessed through physical and cognitive aspects.").
Claims 3-4, 10-11 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mori (as modified by Lu and Timmons) as applied to claims 2, 9 and 15 above, and further in view of Horvitz et al. (US 20100332281 A1 and Horvitz hereinafter) and Duarte De Oliveira et al. (US 20210373664 A1 and Duarte hereinafter).
Regarding Claims 3 and 10
Modified Mori teaches the computer-implemented method of claim 2 and the computer program product of claim 9 (as discussed above in claims 2 and 9),
Mori further teaches further comprising:
determining one or more skillsets and capabilities of the user to perform the activities based on the physiological data (see [0012 "...a skill-level calculation unit configured to calculate the skill level of the target operator with respect to the work step by analyzing the movement data acquired, the skill level indicating a degree on a spectrum of whether or not the target operator can suitably accomplish the work step..."]-[0015 "The type of sensor does not need to be specifically defined as long as the device can measure a physiological parameter associated with the movement of the target operator..."], [0023]-[0024] and [0164]).
Mori is silent regarding determining one or more skillsets and capabilities of robotic counterpart to perform the activities;
assigning new activities based on the one or more skillsets and capabilities to the user and robotic counterpart of the user;
aggregating the one or more skillsets and capabilities of the user and the robotic counterpart of the user; and
communicating to the users via augmented reality (AR) device.
Lu teaches further comprising:
determining one or more skillsets and capabilities of the user and robotic counterpart to perform the activities based on the physiological data (see Fig. 6, Capability Module; Page 70, Section 4)a) "Current human intension can be derived from past activities and activities planned for the future and action recognition by using a multi-modal sensor fusion [14] is a popular approach to predict human intension. The third unit is the capability unit. In this unit, the human abilities and skills, as well as physical characteristics of human, are assessed. The operator's psychological well-being and satisfaction must be fulfilled when developing an HRC system. In HRC systems, ergonomics can be assessed through physical and cognitive aspects." and Section 4)c) "The second unit is the degrees of collaboration unit, which defines the degree of task intersection and dependency between the operator and the cobot. This unit draws a clear line between the various required capabilities of cobots in different industrial scenarios."); and
assigning new activities based on the one or more skillsets and capabilities to the user and robotic counterpart of the user (see Page 69, Section C., Step 4 "In this step, a set of criteria such as the cycle time, the cost and the ergonomics of various task sequences are simulated through the DT framework to produce a set of resource assignments, which include the selections of robots, fixture, grippers and tools. And meanwhile, in order to maximise the abilities of both the operator(s) and robot(s), an optimal workstation layout design will be developed to illustrate the places of various resources, components and equipment to be located.").
Horvitz teaches a computer-implemented method for managing activities of user and robotic counterpart of the user (see all Figs.; [0003]), the computer-implemented method comprising:
further comprising:
determining one or more skillsets and capabilities of the user and robotic counterpart to perform the activities (see Figs. 1-3, all; Abstract "A set of agents and components (potentially including both human agents and automated agent, and components for sensing and effecting action in the world) may also be defined, and each agent may have a set of agent capabilities representing skills, knowledge, resources, relationships, etc., that an agent may commit to a task."; [0003]-[0006] and [0034 "where each agent 16 has at least one agent capability 18, such as a skill that may be performed by the agent 16"]);
assigning new activities based on the one or more skillsets and capabilities to the user and robotic counterpart of the user (see Figs. 1-3, all; Abstract "In turn, each task may be mapped to a set of task capabilities that are involved in completing the task...The tasks of the projects may be fulfilled by identifying coalitions of agents for respective tasks, featuring a sufficient set of agent capabilities corresponding to the task capabilities."; [0003]-[0006]); and
aggregating the one or more skillsets and capabilities of the user and the robotic counterpart of the user (see Figs. 1-3 and 5, all; Abstract "In turn, each task may be mapped to a set of task capabilities that are involved in completing the task...The tasks of the projects may be fulfilled by identifying coalitions of agents for respective tasks, featuring a sufficient set of agent capabilities corresponding to the task capabilities."; [0003]-[0006]).
Duarte teaches a computer-implemented method for managing activities of user (see all Figs.; [0004]-[0011]), the computer-implemented method comprising:
receiving data from a BMI (brain machine interface) device, wherein the user are equipped with the BMI device (see [0007], [0030]-[0031 "The physiological-state-sensing system may comprise one or more sensors for sensing a physiological state of the worker...While the physiological-state-sensing system could comprise a sensor or imaging system, such as an electroencephalogram (EEG) or functional near-infrared spectroscopy (fNIRS) system, for direct monitoring of the worker's brain (e.g., for measuring neural activity)..."] and [0097]);
analyzing the data from the BMI device associated with the user, wherein the data comprises brain signals relating to activities performed by the user (see [0009], [0031] and [0097]-[0098]);
wherein the data is physiological data from the user comprising of skin response (see [0031 "The system may detect or measure one or more properties of the worker's muscular, circulatory, respiratory and/or integumentary systems. In some embodiments, the system may measure or detect any one or more of: breathing rate; heart rate; blood volume pulse; heart-rate variability; sweat; skin conductance..."] and [0154]);
further comprising:
determining one or more skillsets and capabilities of the user to perform the activities based on the physiological data (see [0098], [0100]-[0103], [0107] and [0174]-[0181]); and
communicating to the users via augmented reality (AR) device (see Figs. 2-4, all; [0011], [0101]-[0102] and [0106]-[0107]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to further modify the process/computer program product of modified Mori to include steps of assigning new activities based on the one or more skillsets and capabilities to the user and robotic counterpart of the user and aggregating the one or more skillsets and capabilities of the user and the robotic counterpart of the user, as taught by Horvitz, in order to allocate resources capable of performing the activities associated with a project in a more efficient manner.
It further would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the process/computer program product of modified Mori to include a step of communicating to the users via augmented reality (AR) device, as taught by Duarte, in order to provide a user with a high degree of knowledge about how the activities should be performed.
Regarding Claims 4 and 11
Modified Mori teaches the computer-implemented method of claim 3 and the computer program product of claim 10 (as discussed above in claims 3 and 10),
Mori further teaches wherein the digital twin simulation comprises at least one of: i) predicting whether the user can complete the activities based on a required quality criteria and iv) organizing and/or tracking the one or more skillsets and capabilities of the user (see [0012 "...skill-level calculation unit configured to calculate the skill level of the target operator with respect to the work step by analyzing the movement data acquired, the skill level indicating a degree on a spectrum of whether or not the target operator can suitably accomplish the work step..."], [0013 "The expression “can suitably accomplish the work step” signifies the ability to complete the target work step 40 to at a standard quality within a standard time. The work support device according to the configuration compares the acquired movement data of the model operator with the movement data of the target operator and determines an instruction that helps the movement of the target operator to approach the movement of the model operator with respect to the work."], [0023]-[0024 "The database in the work support device according to one or more aspects may store a plurality of pieces of movement data each corresponding to a skilled operator among a plurality of skilled operators capable of suitably accomplishing the work step as the movement data of the model operator."], [0062] and [0100]-[0101]).
Lu additionally teaches wherein the real-time digital twin simulation comprises at least one of: ii) predicting whether robotic counterpart should assist the user to complete the activities, iv) organizing and/or tracking the one or more skillsets and capabilities of the user and vi) determining whether to combine both the user and the robotic counterpart to complete the activities (see Fig. 6, Capability Module and Motion Module; Page 69, Section C., Step 4) "In this step, a set of criteria such as the cycle time, the cost and the ergonomics of various task sequences are simulated through the DT framework to produce a set of resource assignments, which include the selections of robots, fixture, grippers and tools. And meanwhile, in order to maximise the abilities of both the operator(s) and robot(s), an optimal workstation layout design will be developed to illustrate the places of various resources, components and equipment to be located."; Page 70, Section 4)a) "Current human intension can be derived from past activities and activities planned for the future and action recognition by using a multi-modal sensor fusion [14] is a popular approach to predict human intension. The third unit is the capability unit. In this unit, the human abilities and skills, as well as physical characteristics of human, are assessed. The operator's psychological well-being and satisfaction must be fulfilled when developing an HRC system. In HRC systems, ergonomics can be assessed through physical and cognitive aspects.").
Regarding Claim 16
Modified Mori teaches the computer system of claim 15 (as discussed above in claim 15),
Mori further teaches:
wherein the data is physiological data from the user consisting of brain waves and/or skin response (see [0015 "For example, the movement data may be acquired by observing … a galvanic skin reflex (GSR) and so forth. Further, for example, a sensor may be used as a camera, a motion capture, a load cell, an electroencephalograph (EEG), a magnetoencephalography (MEG), a magnetic resonance imaging (MRI) configured to capture a blood flow associated with a brain activity using the functional magnetic resonance imaging (fMRI), a brain activity measuring device configured to measure a brain blood flow."], [0164] and [0181]) further comprising:
determining one or more skillsets and capabilities of the user to perform the activities based on the physiological data (see [0012 "...a skill-level calculation unit configured to calculate the skill level of the target operator with respect to the work step by analyzing the movement data acquired, the skill level indicating a degree on a spectrum of whether or not the target operator can suitably accomplish the work step..."]-[0015 "The type of sensor does not need to be specifically defined as long as the device can measure a physiological parameter associated with the movement of the target operator..."], [0023]-[0024] and [0164]).
Mori is silent regarding determining one or more skillsets and capabilities of robotic counterpart to perform the activities;
assigning new activities based on the one or more skillsets and capabilities to the user and robotic counterpart of the user;
aggregating the one or more skillsets and capabilities of the user and the robotic counterpart of the user; and
communicating to the user via augmented reality (AR) device.
Lu teaches further comprising:
determining one or more skillsets and capabilities of the user and robotic counterpart to perform the activities based on the physiological data (see Fig. 6, Capability Module; Page 70, Section 4)a) "Current human intension can be derived from past activities and activities planned for the future and action recognition by using a multi-modal sensor fusion [14] is a popular approach to predict human intension. The third unit is the capability unit. In this unit, the human abilities and skills, as well as physical characteristics of human, are assessed. The operator's psychological well-being and satisfaction must be fulfilled when developing an HRC system. In HRC systems, ergonomics can be assessed through physical and cognitive aspects." and Section 4)c) "The second unit is the degrees of collaboration unit, which defines the degree of task intersection and dependency between the operator and the cobot. This unit draws a clear line between the various required capabilities of cobots in different industrial scenarios.") and
assigning new activities based on the one or more skillsets and capabilities to the user and robotic counterpart of the user (see Page 69, Section C., Step 4 "In this step, a set of criteria such as the cycle time, the cost and the ergonomics of various task sequences are simulated through the DT framework to produce a set of resource assignments, which include the selections of robots, fixture, grippers and tools. And meanwhile, in order to maximize the abilities of both the operator(s) and robot(s), an optimal workstation layout design will be developed to illustrate the places of various resources, components and equipment to be located.").
Horvitz teaches further comprising:
determining one or more skillsets and capabilities of the user and robotic counterpart to perform the activities (see Figs. 1-3, all; Abstract "A set of agents and components (potentially including both human agents and automated agent, and components for sensing and effecting action in the world) may also be defined, and each agent may have a set of agent capabilities representing skills, knowledge, resources, relationships, etc., that an agent may commit to a task."; [0003]-[0006] and [0034 "where each agent 16 has at least one agent capability 18, such as a skill that may be performed by the agent 16"]);
assigning new activities based on the one or more skillsets and capabilities to the user and robotic counterpart of the user (see Figs. 1-3, all; Abstract "In turn, each task may be mapped to a set of task capabilities that are involved in completing the task...The tasks of the projects may be fulfilled by identifying coalitions of agents for respective tasks, featuring a sufficient set of agent capabilities corresponding to the task capabilities."; [0003]-[0006]); and
aggregating the one or more skillsets and capabilities of the user and the robotic counterpart of the user (see Figs. 1-3 and 5, all; Abstract "In turn, each task may be mapped to a set of task capabilities that are involved in completing the task...The tasks of the projects may be fulfilled by identifying coalitions of agents for respective tasks, featuring a sufficient set of agent capabilities corresponding to the task capabilities."; [0003]-[0006]).
Duarte teaches wherein the data is physiological data from the user consisting of brain waves and/or skin response (see Fig. 7, sensor signals 702; [0031 "The physiological-state-sensing system may comprise one or more sensors for sensing a physiological state of the worker...While the physiological-state-sensing system could comprise a sensor or imaging system, such as an electroencephalogram (EEG) or functional near-infrared spectroscopy (fNIRS) system, for direct monitoring of the worker's brain (e.g., for measuring neural activity)..."], [0097] and [0154]-[0155]), further comprising:
determining one or more skillsets and capabilities of the user to perform the activities based on the physiological data (see [0098], [0100]-[0103], [0107] and [0174]-[0181]); and
communicating to the user via augmented reality (AR) device (see Figs. 2-4, all; [0011], [0101]-[0102] and [0106]-[0107]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to further modify the computer system of modified Mori to include steps of assigning new activities based on the one or more skillsets and capabilities to the user and robotic counterpart of the user and aggregating the one or more skillsets and capabilities of the user and the robotic counterpart of the user, as taught by Horvitz, in order to allocate resources capable of performing the activities associated with a project in a more efficient manner.
It further would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the computer system of modified Mori to include a step of communicating to the user via augmented reality (AR) device, as taught by Duarte, in order to provide a user with a high degree of knowledge about how the activities should be performed.
Claims 6, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mori (as modified by Lu and Timmons) as applied to claims 1, 8 and 15 above, and further in view of Saur et al. (US 20220039883 A1, hereinafter Saur).
Regarding Claims 6, 13 and 19
Modified Mori teaches the computer-implemented method of claim 1, the computer program product of claim 8 and the computer system of claim 15 (as discussed above in claims 1, 8 and 15),
Mori further teaches wherein comparing the data against the digital twin simulation based on the work criteria wherein the work criteria includes efficiency, quality, and cost (see Fig. 15, all; [0013], [0062], [0100]-[0101] and [0190]), further comprising:
determining whether the user can complete the activities within an allotted timeframe (see Fig. 15, all; [0013 "The expression “can suitably accomplish the work step” signifies the ability to complete the target work step 40 to at a standard quality within a standard time."], [0062], [0100]-[0101] and [0190]); and
determining whether the user can complete the activities without sacrificing quality (see [0013 "The expression “can suitably accomplish the work step” signifies the ability to complete the target work step 40 to at a standard quality within a standard time."], [0062], [0100]-[0101] and [0190]).
Mori is silent regarding determining whether the user can complete the activities within an allotted cost; and
determining whether the user can complete the activities without sacrificing safety.
Saur teaches a computer-implemented method for managing activities of user and robotic counterpart of the user (see all Figs.; [0001] and [0006]-[0007]), the computer-implemented method comprising:
receiving data from a device (see [0001 "...at least one robot sensor for providing robot sensor data giving an actuator feedback signal and depending on conditions in the surgical field…"] and [0012 "The robot sensor detects a state of the moveable robot member and of the actuator and creates sensor data giving an actuator feedback signal."]);
analyzing the data associated with the user (see [0001 "...a control device for controlling the actuator according to a control program and under feedback of the robot sensor data."] and [0012 "The actuator feedback signal represents information about the actual state of the moveable robot member and the actuator, e.g., pose, temperature, etc. The actuator feedback signal depends on conditions of the surgical field, e.g., body temperature, cardiac status, pulse, blood pressure, type of tissue, stiffness of tissue, etc."]);
determining the activities of the user and the robotic counterpart of the user (see [0001 "...a control device for controlling the actuator according to a control program and under feedback of the robot sensor data."] and [0013 "The patient sensor detects a state of the patient caused by the action of the surgical robot and provides patient sensor data resulting in a patient feedback signal."]);
creating a real-time digital twin simulation of the user and the robotic counterpart based on the activities (see [0009 "To simulate the surgical robot in the learning mode the processing unit includes and utilizes a virtual surgical robot. The virtual surgical robot simulates movement and action of the (real) moveable robot member and the actuator with its kinematic and geometric characteristics by a virtual 3D model of the (real) robot."] and [0010 "The virtual surgical field simulates the (real) surgical field and includes a virtual anatomical model. The virtual anatomical model simulates the patient in form of a virtual 3D model ... In exemplary embodiments, the virtual anatomic model includes a 3D model of an individual surgeon or a surgical team."]);
analyzing the real-time digital twin simulation based on the activities of the user and robotic counterpart (see [0009 "To simulate the surgical robot in the learning mode the processing unit includes and utilizes a virtual surgical robot. The virtual surgical robot simulates movement and action of the (real) moveable robot member and the actuator with its kinematic and geometric characteristics by a virtual 3D model of the (real) robot."] and [0010 "The virtual surgical field simulates the (real) surgical field and includes a virtual anatomical model. The virtual anatomical model simulates the patient in form of a virtual 3D model ... In exemplary embodiments, the virtual anatomic model includes a 3D model of an individual surgeon or a surgical team."]); and
comparing the data of the user against the robotic counterpart based on the work criteria (see [0021 "In other exemplary embodiments, instruments or gestures are tracked by comparing the virtual action of the virtual surgical robot and the real action of the surgical robot based on the target cost function…"]);
wherein comparing the data against the digital twin simulation based on the work criteria wherein the work criteria includes cost (see [0018]-[0021] and [0050]), further comprising:
determining whether the user can complete the activities within an allotted cost (see [0018]-[0021 "In other exemplary embodiments, instruments or gestures are tracked by comparing the virtual action of the virtual surgical robot and the real action of the surgical robot based on the target cost function…"] and [0050]); and
determining whether the user can complete the activities without sacrificing safety (see [0019 "In another exemplary embodiment, a best alignment mode is determined by the target cost function for example by... prohibiting collisions by maximizing distances of the surgeon to other tools, maximizing ergonomics of the surgeon."] and [0025]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to further modify the process/computer program product/computer system of modified Mori to include a step of determining whether the user can complete the activities within an allotted cost without sacrificing safety, as taught by Saur, in order to minimize a target cost function and thus maximize output.
Claims 7, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mori (as modified by Lu and Timmons) as applied to claims 1, 8 and 15 above, and further in view of Horvitz.
Regarding Claims 7, 14 and 20
Modified Mori teaches the computer-implemented method of claim 1, the computer program product of claim 8 and the computer system of claim 15 (as discussed above in claims 1, 8 and 15),
Mori is silent regarding wherein the one or more solutions further comprises: i) validates if an aggregated skills and capabilities from the one or more skills and capabilities can be used for completing the activities, ii) determines that the aggregated skills and capabilities will not be possible to complete the activities, then determines reallocation of the activities, iii) identifies which steps of the activities of the user are to be reallocated to the robotic counterpart, and iv) identifies the activities to be allocated to any existing robotic counterpart or any new robotic counterpart can be allocated to perform the activities.
Horvitz teaches wherein the one or more solutions further comprises: i) validates if an aggregated skills and capabilities from the one or more skills and capabilities can be used for completing the activities (see Fig. 3, all, especially "tasks 22" and "capabilities 24"; [0032]-[0033], especially [0032 "One technique for allocating agents 16 of an agent set 14 to tasks 22 in various projects 20 may involve a repeating of a selection of agents 16 to perform a task 22 that has not yet been fulfilled, until all of the tasks 22 of all of the projects 20 have been fulfilled through the allocation of a suitable set of agents 16...Therefore, the first machine and the second laboratory technician are identified as having a set of agent capabilities 18 that collectively correspond to the task capabilities 24 of the second task 22."] and [0033 "The actions 56 include selecting 58 a coalition 40, comprising at least one agent 16 in the agent set 14, where the agents 16 of the coalition 40 collectively comprise agent capabilities 16 that correspond to the task capabilities 24 of an unfulfilled task 22."]), ii) determines that the aggregated skills and capabilities will not be possible to complete the activities, then determines reallocation of the activities (see Fig. 3, all, especially the updated coalition 40 across each quadrant; [0032]-[0033], especially [0032 "One technique for allocating agents 16 of an agent set 14 to tasks 22 in various projects 20 may involve a repeating of a selection of agents 16 to perform a task 22 that has not yet been fulfilled, until all of the tasks 22 of all of the projects 20 have been fulfilled through the allocation of a suitable set of agents 16...This technique may then be performed by selecting a coalition 40 of agents 16 to fulfill a particular task 22 until all of the tasks 22 have been fulfilled. At a first time point 42, a task 22 may be selected for fulfillment...Additional allocations may be performed at a third time point 46 (allocating a coalition 40 comprising the third laboratory technician and the second machine to fulfill the third task 16) and a fourth time point 48 (allocating a coalition 40 comprising the fifth laboratory technician and the sixth laboratory technician to fulfill the first task 16) to complete the allocation."]), iii) identifies which steps of the activities of the user are to be reallocated to the robotic counterpart (see Fig. 3, all, especially "machine 1" and "task 2" in the first quadrant and "machine 2" and "task 3" in the third quadrant; [0032]-[0033], especially [0032 "Therefore, the first machine and the second laboratory technician are identified as having a set of agent capabilities 18 that collectively correspond to the task capabilities 24 of the second task 22...Additional allocations may be performed at a third time point 46 (allocating a coalition 40 comprising the third laboratory technician and the second machine to fulfill the third task 16) and a fourth time point 48 (allocating a coalition 40 comprising the fifth laboratory technician and the sixth laboratory technician to fulfill the first task 16) to complete the allocation."]), and iv) identifies the activities to be allocated to any existing robotic counterpart or any new robotic counterpart can be allocated to perform the activities (see Fig. 3, all, especially "machine 1" and "task 2" in the first quadrant and "machine 2" and "task 3" in the third quadrant; [0032]-[0033], especially [0032 "Therefore, the first machine and the second laboratory technician are identified as having a set of agent capabilities 18 that collectively correspond to the task capabilities 24 of the second task 22...Additional allocations may be performed at a third time point 46 (allocating a coalition 40 comprising the third laboratory technician and the second machine to fulfill the third task 16) and a fourth time point 48 (allocating a coalition 40 comprising the fifth laboratory technician and the sixth laboratory technician to fulfill the first task 16) to complete the allocation."]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to further modify the process/computer program product/computer system of modified Mori to include the steps to i) validate if an aggregated skills and capabilities from one or more skills and capabilities can be used for completing the activities, ii) determine that the aggregated skills and capabilities will not be possible to complete the activities, then determine reallocation of the activities, iii) identify which steps of the activities of the user are to be reallocated to the robotic counterpart, and iv) identify the activities to be allocated to any existing robotic counterpart or any new robotic counterpart can be allocated to perform the activities, as taught by Horvitz, in order to allocate resources capable of performing the activities associated with a project in a more efficient manner.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TANNER LUKE CULLEN whose telephone number is (303)297-4384. The examiner can normally be reached Monday-Friday 9:00-5:00 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TANNER L CULLEN/Examiner, Art Unit 3656
/KHOI H TRAN/Supervisory Patent Examiner, Art Unit 3656