DETAILED ACTION
This Non-Final Office Action is in response to preliminary amendments filed 8/1/2024.
Claims 1-20 have been canceled.
Claims 21-40 are new claims.
Claims 21-40 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 7/31/2024 has been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis of Claim 21
Claim 21. A computer-implemented method of managing a robotic device using variable autonomous control, comprising:
receiving data indicative of an area proximate to the robotic device;
based on the data, generating a scene by:
generating a digital twin and identifying objects within the digital twin;
accessing, from a knowledge database, information about the identified objects;
identifying a set of potential scenes based on a context of the objects; and
selecting a likeliest scene of the set of potential scenes; and
accessing the knowledge database having contextual and semantic labels for assisting the robotic device in executing tasks in context of one or more scenes.
101 Analysis - Step 1: Statutory category - Yes
The claim recites a method including at least one step and therefore falls within one of the four statutory categories. See MPEP 2106.03.
101 Analysis - Step 2A Prong one evaluation: Judicial Exception - Yes - Mental processes
The claim is to be analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity.
The Office submits that the foregoing bolded limitations constitute judicial exceptions in terms of “mental processes” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind.
The claim recites the limitation of based on the data, generating a scene. Based on the plain meaning of the terms in light of the Applicant's disclosure, the limitation of “scene” is data representative of a place or behavior.
Therefore, this limitation, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. data) and forming a simple observation and evaluation (i.e. generate a scene). Such observations and evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
The claim recites the limitation of generating a digital twin and identifying objects within the digital twin. Based on the plain meaning of the terms in light of the Applicant's disclosure, the limitation of “digital twin” is data representative of real-world entities, and the limitation of “objects” is data representative of elements within the digital twin.
Therefore, this limitation, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. digital twin) and forming a simple observation and evaluation (i.e. identify objects within the digital twin). Such observations and evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
The computer implementing the method is recited at a high level of generality; the claim merely uses a computer (i.e. computer) as a tool to perform the processes (i.e. method), which does not preclude the claims from reciting the abstract process when tested per MPEP 2106.04(a)(2)(III)(C)#3.
The mere nominal recitation of a “robotic device” as being an object assisted by the claimed method does not take the claim limitations out of the mental process grouping.
Thus, the claim recites, describes, or sets forth a mental process.
101 Analysis - Step 2A Prong two evaluation: Practical Application - No
The claim is evaluated for whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”).
The claim recites additional elements of:
receiving data indicative of an area proximate to the robotic device;
accessing, from a knowledge database, information about the identified objects;
identifying a set of potential scenes based on a context of the objects; and
selecting a likeliest scene of the set of potential scenes; and
accessing the knowledge database having contextual and semantic labels for assisting the robotic device in executing tasks in context of one or more scenes.
The “receiving” step is recited at a high level of generality (i.e. as a general receiving of data indicative of an area) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The first “accessing” step is recited at a high level of generality (i.e. as a general accessing of information about objects) and amounts to selecting a particular data source or type of data to be manipulated, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g). No technological details are recited with respect to the “knowledge database” itself. Specifically, when tested per MPEP 2106.05(f)(1), such limitation is interpreted as a result-oriented solution rather than an actual technological improvement. Thus, the knowledge database is found not to integrate the abstract idea into a practical application or provide significantly more.
The “identifying” step is recited at a high level of generality (i.e. as a general identifying of potential scenes) and amounts to selecting a particular data source or type of data to be manipulated based upon generally recited data, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “selecting” step is recited at a high level of generality (i.e. as a general selecting of a scene) and amounts to selecting a particular data source or type of data to be manipulated, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The second “accessing” step is recited at a high level of generality (i.e. as a general accessing of contextual and semantic labels) and amounts to selecting a particular data source or type of data to be manipulated, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g). No technological details are recited with respect to the “knowledge database” itself. Specifically, when tested per MPEP 2106.05(f)(1), such limitation is interpreted as a result-oriented solution rather than an actual technological improvement. Thus, the knowledge database is found not to integrate the abstract idea into a practical application or provide significantly more.
The “robotic device” merely describes how to generally “apply” the otherwise mental judgments in a generic or general-purpose computing environment. The robotic device is recited at a high level of generality and is merely automating a general execution of tasks, which does not integrate the abstract idea into a practical application or provide significantly more. See MPEP 2106.05(f).
101 Analysis - Step 2B evaluation: Inventive concept - No
The claim is evaluated for whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim.
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the receiving, accessing, identifying, and selecting steps were considered to be insignificant extra-solution activity in Step 2A, and thus they are re-evaluated in Step 2B to determine whether they amount to more than what is well-understood, routine, conventional activity in the field. The background recites the robotic device as being conventional, and the specification does not provide any indication that the knowledge database is anything other than a conventional database. Mere collection or receipt of data over a network and storing and retrieving information in memory are well-understood, routine, and conventional functions when claimed in a merely generic manner, as they are here. See MPEP 2106.05(d)(II) and the cases cited therein, including TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014); and Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014). Thus, the claim is ineligible.
101 Analysis of Dependent Claims 22-33
Dependent claims 22-33 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application.
Claim 22 recites the additional elements of wherein the knowledge database is generated using machine learning. Further limiting the “knowledge database” to be generated using machine learning represents a mere narrowing of the abstract idea (step 2A prong one) and does not impose meaningful limits on the claim beyond what has already been identified as abstract. Thus, limiting the “knowledge database” to be generated using machine learning does not further integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 23 recites the additional elements of wherein the task is determined based on a motion planning algorithm, further comprising generating alternative motion plans based on the contextual and semantic labels. Based on the plain meaning of the terms in light of the Applicant's disclosure, the limitation of “motion planning algorithm” is data representative of a calculation designed for motion planning. The broadest reasonable interpretation of “alternative motion plans,” in light of the overall claim and Applicant's disclosure, is data that reflects planned motion. The task and motion plan are merely determined based on the generally recited data and do not require any particular sensors or controlled operations of the robotic device.
Therefore, this limitation, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. motion planning algorithm) and forming simple observations and evaluations (i.e. determine the task), and a person looking at data collected (i.e. contextual and semantic labels) and forming simple evaluations (i.e. generating alternative motion plans). Such evaluations and observations are listed as abstract by MPEP 2106.04(a)(2)(III).
Based on the tests above, the Examiner finds that the additional elements do not integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 24 recites the additional elements of the method further comprising:
based on the knowledge database, identifying a task associated with the scene;
dividing the task into sub-tasks; and
determining a risk threshold based on the scene, the sub-tasks, and one or more trust thresholds.
The “dividing” step, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. task) and forming a simple observation and evaluation (i.e. dividing the task into sub-tasks). Such observations and evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
The “determining” step, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. scene, sub-tasks, and trust thresholds) and forming a simple evaluation (i.e. determine a risk threshold). Such evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
The “identifying” step is recited at a high level of generality (i.e. as a general identifying of a task associated with the scene) and amounts to selecting a particular data source or type of data to be manipulated, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
No technological details are recited with respect to the “knowledge database” itself. Specifically, when tested per MPEP 2106.05(f)(1), such limitation is interpreted as a result-oriented solution rather than an actual technological improvement. Thus, the knowledge database is found not to integrate the abstract idea into a practical application or provide significantly more.
Based on the tests above, the Examiner finds that the additional elements do not integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 25 recites the additional elements of:
based on the risk threshold, determining a ratio of the sub-tasks to be controlled by a user;
in accordance with the risk threshold, receiving a user input for controlling one or more of the sub-tasks when the ratio dictates that at least one of the sub-tasks requires the user input; and
causing performance of the sub-tasks by the robotic device.
The “determining” step, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. sub-tasks and risk threshold) and forming a simple observation and evaluation (i.e. determine a ratio of the sub-tasks to be controlled by a user). Such observations and evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
The “receiving” step is recited at a high level of generality (i.e. as a general receiving of user input for controlling sub-tasks when dictated by the ratio) and amounts to mere data gathering upon a generally recited condition, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “causing” step is recited at a high level of generality (i.e. as a general causing of performance of a sub-task) and amounts to post-solution activity, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The “robotic device” contributes only nominally or insignificantly to the execution of the claimed method (e.g., in an insignificant extra-solution activity step or in a field-of-use limitation) and is merely an object on which the method operates (e.g., performs the sub-tasks); therefore, the limitation involving the “robotic device” does not integrate the abstract idea into a practical application or provide significantly more. See MPEP 2106.05(b).
Based on the tests above, the Examiner finds that the additional elements do not integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 26 recites the additional elements of simulating and evaluating a result of the task from the set of potential tasks. This limitation, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. set of potential tasks) and forming a simple observation and evaluation (i.e. simulating and evaluating a result of the task from the set of potential tasks). Such observations and evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
No technological details are recited with respect to the “simulating” itself. Specifically, when tested per MPEP 2106.05(f)(1), such limitation is interpreted as a result-oriented solution rather than an actual technological improvement. Thus, the “simulating” step is found not to integrate the abstract idea into a practical application or provide significantly more.
Based on the tests above, the Examiner finds that the additional elements do not integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 27 recites the additional elements of wherein at least one of:
the user input comprises a feedback loop with the user when user input is needed, and the ratio is progressively updated over time based on updates to the knowledge database and the risk threshold.
The “updating” step, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. the ratio, updates to the knowledge database, and the risk threshold) and forming simple observations and evaluations (i.e. progressively updating the ratio over time). Such observations and evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
Further limiting the “user input” to include a feedback loop with the user represents a mere narrowing of the abstract idea (step 2A prong one) and does not impose meaningful limits on the claim beyond what has already been identified as abstract. Thus, limiting the user input to include a feedback loop with the user does not further integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 28 recites the additional elements of wherein:
the scene is generated by generating a digital twin and identifying objects within the digital twin; and
the user input is received via a user interface to the digital twin.
The “generating” step, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. digital twin) and forming a simple observation and evaluation (i.e. generating a scene and identifying objects within the digital twin). Such observations and evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
Further limiting the “user input” to be received via a user interface to the digital twin represents a mere narrowing of the abstract idea (step 2A prong one) and does not impose meaningful limits on the claim beyond what has already been identified as abstract. Thus, limiting the user input to be received via a user interface to the digital twin does not further integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
No technological details are recited with respect to the “user interface” itself. Specifically, when tested per MPEP 2106.05(f)(1), such limitation is interpreted as a result-oriented solution rather than an actual technological improvement. Thus, the user interface is found not to integrate the abstract idea into a practical application or provide significantly more.
Claim 29 recites the additional elements of determining one or more intervention objectives usable to determine the task based on the scene. Based on the plain meaning of the terms in light of the Applicant's disclosure, the limitation of “intervention objectives” is data representing objectives. This limitation, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. scene) and forming a simple evaluation (i.e. determine intervention objectives). Such evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
Based on the tests above, the Examiner finds that the additional elements do not integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 30 recites the additional elements of wherein the task comprises one or more constraints or characteristics for the task. Further limiting the “task” to include one or more constraints or characteristics for the task represents a mere narrowing of the abstract idea (step 2A prong one) and does not impose meaningful limits on the claim beyond what has already been identified as abstract. Thus, limiting the task to include one or more constraints or characteristics for the task does not further integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 31 recites the additional elements of:
generating a pre-execution virtual scene and proposed task sequence for presentation to the user; and
receiving the trust threshold via the user input.
Based on the plain meaning of the terms in light of the Applicant's disclosure, the limitations of “pre-execution virtual scene” and “proposed task sequence” are merely data representative of a scene and a sequence of tasks, respectively. The “generating” step, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person forming a simple evaluation (i.e. generating a pre-execution virtual scene and proposed task sequence). Such evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
The “receiving” step is recited at a high level of generality (i.e. as a general receiving of the trust threshold via user input) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
Based on the tests above, the Examiner finds that the additional elements do not integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 32 recites the additional elements of wherein the risk threshold is indicative of a level of autonomy defined as one of full manual, augmented control, semi-autonomous, or fully-autonomous. Further limiting the “risk threshold” to be indicative of a level of autonomy defined as one of full manual, augmented control, semi-autonomous, or fully-autonomous represents a mere narrowing of the abstract idea (step 2A prong one) and does not impose meaningful limits on the claim beyond what has already been identified as abstract. Thus, limiting the risk threshold to be indicative of a level of autonomy defined as one of full manual, augmented control, semi-autonomous, or fully-autonomous does not further integrate the abstract idea into a practical application (step 2A prong two) or provide significantly more (step 2B).
Claim 33 recites the additional elements of wherein the updates to the knowledge database are generated based on user feedback and assessment of performance of the task. This limitation, as drafted, is a simple cognitive process that, under its broadest reasonable interpretation, can be practically performed in the human mind, or by a human using a pen and paper. For example, the claim encompasses a person looking at data collected (i.e. user feedback and performance of the task) and forming a simple evaluation (i.e. generate updates). Such evaluations are listed as abstract by MPEP 2106.04(a)(2)(III).
No technological details are recited with respect to the “knowledge database” itself. Specifically, when tested per MPEP 2106.05(f)(1), such limitation is interpreted as a result-oriented solution rather than an actual technological improvement. Thus, the knowledge database is found not to integrate the abstract idea into a practical application or provide significantly more.
Therefore, dependent claims 22-33 are not patent eligible under the same rationale as provided for in the rejection of independent claim 21.
101 Analysis of Claim 34
An analysis similar to that of independent claim 21 is made for independent claim 34.
Claim 34. A system comprising:
a memory storing thereon instructions that when executed by a processor of the system, cause the system to perform operations comprising:
receiving data indicative of an area proximate to a robotic device;
based on the data, generating a scene by:
generating a digital twin and identifying objects within the digital twin;
accessing, from a knowledge database, information about the identified objects;
identifying a set of potential scenes based on a context of the objects; and
selecting a likeliest scene of the set of potential scenes; and
accessing the knowledge database having contextual and semantic labels for assisting the robotic device in executing tasks in context of one or more scenes.
101 Analysis - Step 1: Statutory category - Yes
The claim recites a system and therefore falls within one of the four statutory categories. See MPEP 2106.03.
101 Analysis - Step 2A Prong one evaluation: Judicial Exception - Yes - Mental processes
See the analysis provided for claim 21 above.
The additional computer elements (i.e. memory and processor) performing the claimed operations are recited at a high level of generality; the claim merely uses a computer (i.e. memory and processor) as a tool to perform the processes, which does not preclude the claims from reciting the abstract process when tested per MPEP 2106.04(a)(2)(III)(C)#3.
Thus, the claim recites, describes, or sets forth a mental process.
101 Analysis - Step 2A Prong two evaluation: Practical Application - No
See the analysis provided for claim 21 above.
The elements of the memory and processor merely act in their ordinary capacity for tasks (e.g., to receive, store, or transmit data), and therefore, do not integrate the abstract idea into a practical application or provide significantly more. See MPEP 2106.05(f)(2).
101 Analysis - Step 2B evaluation: Inventive concept - No
See the analysis provided for claim 21 above.
Additionally, the specification does not provide any indication that the memory and processor are anything other than conventional.
101 Analysis of Dependent Claims 35-40
Dependent claims 35-40 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application.
Specifically, claims 35-40 are similar to claims 24-27 and 31, and therefore, the limitations of claims 35-40 represent a mere narrowing of the abstract idea (step 2A prong one) with no additional computer-based elements integrating the abstract idea into a practical application (step 2A prong two) or providing significantly more (step 2B), using analyses similar to those discussed in the rejections of dependent claims 24-27 and 31 above.
Therefore, dependent claims 35-40 are not patent eligible under the same rationale as provided for in the rejection of independent claim 34.
Claims 21-40 are thus found ineligible under 35 U.S.C. §101 as directed to an abstract idea, with the additional computer-based elements, as tested above, not integrating the abstract idea into a practical application (Step 2A prong two) or providing significantly more (Step 2B).
Key to Interpreting this Office Action
For readability, all claim language has been underlined.
Citations from prior art are provided at the end of each limitation in parentheses.
Any further explanations that were deemed necessary by the Examiner are provided at the end of each claim limitation.
The Applicant is encouraged to contact the Examiner directly if there are any questions or concerns regarding the current Office Action.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 and 15-18 of U.S. Patent No. US 12,083,677 B2, hereinafter Mathieu ‘677. Although the claims at issue are not identical, they are not patentably distinct from each other because application claims 21-40 are anticipated by patent claims 1-12 and 15-18.
Patent claim 1 of Mathieu ‘677 recites a computer-implemented method of managing a robotic device using variable autonomous control, comprising:
receiving data indicative of an area proximate to the robotic device;
based on the data, generating a scene by:
generating a digital twin and identifying objects within the digital twin;
accessing, from a knowledge database, information about the identified objects;
identifying a set of potential scenes based on a context of the objects; and
selecting a likeliest scene of the set of potential scenes; and
accessing the knowledge database having contextual and semantic labels for assisting the robotic device in executing tasks in context of one or more scenes.
Therefore, patent claim 1 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 21. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 2 of Mathieu ‘677 recites wherein the knowledge database is generated using machine learning. Therefore, patent claim 2 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 22. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 3 of Mathieu ‘677 recites wherein the task is determined based on a motion planning algorithm, further comprising generating alternative motion plans based on the contextual and semantic labels. Therefore, patent claim 3 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 23. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 1 of Mathieu ‘677 recites the method further comprising:
based on the knowledge database, identifying a task associated with the scene;
dividing the task into sub-tasks; and
determining a risk threshold based on the scene, the sub-tasks, and one or more trust thresholds.
Therefore, patent claim 1 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 24. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 1 recites the method further comprising:
based on the risk threshold, determining a ratio of the sub-tasks to be controlled by a user;
in accordance with the risk threshold, receiving a user input for controlling one or more of the sub-tasks when the ratio dictates that at least one of the sub-tasks requires the user input; and
causing performance of the sub-tasks by the robotic device.
Therefore, patent claim 1 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 25. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 6 recites simulating and evaluating a result of the task from the set of potential tasks. Therefore, patent claim 6 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 26. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 7 recites wherein at least one of: the user input comprises a feedback loop with the user when user input is needed, and the ratio is progressively updated over time based on updates to the knowledge database and the risk threshold. Therefore, patent claim 7 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 27. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 8 recites wherein:
the scene is generated by generating a digital twin and identifying objects within the digital twin; and
the user input is received via a user interface to the digital twin.
Therefore, patent claim 8 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 28. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 9 recites determining one or more intervention objectives usable to determine the task based on the scene. Therefore, patent claim 9 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 29. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 10 recites wherein the task comprises one or more constraints or characteristics for the task. Therefore, patent claim 10 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 30. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 11 recites further comprising:
generating a pre-execution virtual scene and proposed task sequence for presentation to the user; and
receiving the trust threshold via the user input.
Therefore, patent claim 11 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 31. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 12 recites wherein the risk threshold is indicative of a level of autonomy defined as one of full manual, augmented control, semi-autonomous, or fully-autonomous. Therefore, patent claim 12 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 32. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 4 recites wherein the updates to the knowledge database are generated based on user feedback and assessment of performance of the task. Therefore, patent claim 4 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 33. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 15 recites a system comprising:
a memory storing thereon instructions that when executed by a processor of the system, cause the system to perform operations comprising:
receiving data indicative of an area proximate to a robotic device;
based on the data, generating a scene by:
generating a digital twin and identifying objects within the digital twin;
accessing, from a knowledge database, information about the identified objects;
identifying a set of potential scenes based on a context of the objects; and
selecting a likeliest scene of the set of potential scenes; and
accessing the knowledge database having contextual and semantic labels for assisting the robotic device in executing tasks in context of one or more scenes.
Therefore, patent claim 15 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 34. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 15 recites the system to perform operations further comprising:
based on information from the knowledge database, identifying a task associated with the scene;
dividing the task into sub-tasks; and
determining a risk threshold based on the scene, the sub-tasks, and one or more trust thresholds.
Therefore, patent claim 15 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 35. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 15 recites the system to perform operations further comprising:
based on the risk threshold, determining a ratio of the sub-tasks to be controlled by a user;
in accordance with the risk threshold, receiving a user input for controlling one or more of the sub-tasks when the ratio dictates that at least one of the sub-tasks requires the user input; and
causing performance of the sub-tasks by the robotic device.
Therefore, patent claim 15 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 36. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 15 recites wherein the ratio is progressively updated over time based on updates to the knowledge database and the risk threshold. Therefore, patent claim 15 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 37. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 18 recites wherein the task is identified by evaluating and selecting a task from a set of potential tasks, further comprising instructions that when executed by a processor of the system, cause the system to perform operations comprising:
simulating and evaluating a result of the task from the set of potential tasks.
Therefore, patent claim 18 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 38. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 16 recites wherein the user input comprises a feedback loop with the user when user input is needed. Therefore, patent claim 16 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 39. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Patent claim 17 recites further comprising instructions that when executed by a processor of the system, cause the system to perform operations comprising:
generating a pre-execution virtual scene and proposed task sequence for presentation to the user; and
receiving the trust threshold via the user input.
Therefore, patent claim 17 of Mathieu ‘677 is in essence a “species” of the generic invention of application claim 40. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Claim Objections
Claim 36 is objected to because of the following informalities:
In the second line of claim 36, the limitation ends with a period after “user” and should instead end with a semicolon.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 21-40 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With respect to claims 21 and 34, the term “likeliest” is a relative term which renders the claim indefinite. The term “likeliest” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Specifically, the limitation of selecting a likeliest scene of the set of potential scenes is unclear, and one of ordinary skill in the art would be unable to determine the “likeliest scene” in light of the overall claim. Claims 22-33 and 35-40 are rejected under 35 U.S.C. 112(b) for incorporating the errors of claims 21 and 34 by dependency.
Claim 23 recites the limitation of the task. There is insufficient antecedent basis for this limitation in the claim. Specifically, independent claim 21 recites a plurality of “tasks,” and the particular task that is determined in claim 23 cannot be reasonably identified by the claim language.
Claim 26 recites the limitation of the set of potential tasks. There is insufficient antecedent basis for this limitation in the claim. Specifically, a “set of potential scenes” is recited in claim 21. The Applicant’s intention is unclear. A “set of potential tasks” cannot be considered an inherent feature of the claimed method.
Claim 27 recites the limitations of the user input, the user, and the ratio. There is insufficient antecedent basis for these limitations in the claim. Specifically, claim 27 depends from claim 24, not claim 25 where “user input,” “user,” and “ratio” are introduced. User input, a user, and a ratio cannot be considered inherent features of the claimed method.
Claim 28 recites the limitation of the user input. There is insufficient antecedent basis for this limitation in the claim. Specifically, claim 28 depends from claim 21, not claim 25 where the “user input” is introduced. User input cannot be considered an inherent feature of the claimed method.
Claim 31 recites the limitations of the trust threshold and the user input. There is insufficient antecedent basis for these limitations in the claim. Specifically, claim 31 depends from claim 21, not claim 25 where the “trust threshold” and “user input” are introduced. A trust threshold and user input cannot be considered inherent features of the claimed method.
Claim 32 recites the limitation of the risk threshold. There is insufficient antecedent basis for this limitation in the claim. Specifically, claim 32 depends from claim 21, not claim 24 where the “risk threshold” is introduced. Risk thresholds cannot be considered an inherent feature of the claimed method.
Claim 33 recites the limitation of the updates. There is insufficient antecedent basis for this limitation in the claim. Specifically, claim 33 depends from claim 21, not claim 27 where “updates” are introduced. Updates cannot be considered an inherent feature of the claimed method.
Claim 37 recites the limitation of the ratio. There is insufficient antecedent basis for this limitation in the claim. Specifically, claim 37 depends from claim 35, not claim 36 where the “ratio” is introduced. A ratio cannot be considered an inherent feature of the claimed system.
Claim 38 recites the limitation of the task. There is insufficient antecedent basis for this limitation in the claim. Specifically, claim 38 depends from claim 34, not claim 35 where a “task” is introduced. Independent claim 34 recites a plurality of “tasks,” and the particular task that is identified in claim 38 cannot be reasonably determined by the claim language.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-23, 28-30, 33, and 34 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Taylor et al. (US 2019/0065960 A1), hereinafter Taylor.
Claim 21
Taylor discloses the claimed computer-implemented method of managing a robotic device (i.e. autonomous personal companion 100) using variable autonomous control (see Figure 8, where the matched scenario of step 870 is used to determine a matched algorithm for execution by the personal companion, as described in ¶0146), comprising:
receiving data indicative of an area proximate to the robotic device (see ¶0140, with respect to step 810 of Figure 8, regarding capturing data related to the user and environment in which the user is located by the autonomous personal companion);
based on the data, generating a scene (i.e. scenario) by:
generating a digital twin (i.e. AI model) and identifying objects (i.e. user and environment of the user) within the digital twin (see ¶0142, regarding that the AI model of the user is built based on a plurality of predefined or learned patterns that define the contextual environment of the user, where the personal companion monitors the user and the environment of the user via sensors, as described in ¶0041);
accessing, from a knowledge database, information (i.e. tags) about the identified objects (i.e. user or environment) (see ¶0144, regarding that a plurality of sets of predefined tags are accessed for comparison processes, where the tags classify captured data related to the user and environment within which the user is located, as described in ¶0077, and are stored in local memory 304 or at back-end server 140, as described in ¶0119, with respect to Figure 7A);
identifying a set of potential scenes based on a context of the objects (see ¶0144-0145, with respect to step 840 of Figure 8, regarding weighting sets of predefined tags, where the weight corresponds to a matching quality between the collected set of tags and a corresponding set of predefined tags, where each scenario is defined by a set of predefined learned patterns corresponding to a set of predefined tags); and
selecting a likeliest scene (i.e. matched scenario) of the set of potential scenes (see ¶0145, with respect to step 870 of Figure 8, regarding that a matched scenario is selected for the collected set of tags that is associated with a matched set of predefined tags having a corresponding weight with the highest match quality);
accessing the knowledge database having contextual and semantic labels for assisting the robotic device in executing tasks (i.e. matched algorithm) in context of one or more scenes (see ¶0146-0148, regarding that a matched algorithm of the matched scenario of a plurality of scenarios is executed by the personal companion that includes actions that involve moving the autonomous personal companion, where predefined tags and patterns are accessed to determine the matched scenario, as described in ¶0144-0145, where the plurality of scenarios and their associated predefined learned patterns 706 and predefined tags 711 are stored in local memory 304 or at the back-end server 140, as described in ¶0119, with respect to Figure 7A). The predefined tags represent “semantic labels,” in that they classify specific patterns of behaviors of the user, e.g., “time of day is 7pm,” “user is returning from work,” or “user is sitting on couch” (see ¶0146). The scenarios represent “contextual labels,” in that they provide contextualization of user behavior (see ¶0120).
The AI model of Taylor may be reasonably interpreted as a “digital twin,” in light of the common definition of the term “digital twin.” Specifically, a digital twin is known in the art as a dynamic, virtual replica of a physical object or system that uses real-time data to mirror its real-world counterpart’s behavior, performance, and conditions. In this case, the AI model of Taylor models a user’s behaviors, biometrics, and needs with respect to context based on collected data of the user and environment (see ¶0040-0041), so as to predict how the user will act or what they will need without explicit instructions (see ¶0033, ¶0045, ¶0120). The AI model of Taylor is continually updated as the user and environment are monitored (see ¶0126).
Claim 22
Taylor further discloses that the knowledge database is generated using machine learning (see ¶0126, regarding that the predefined learned patterns are built through the deep learning engine, where each of the predefined learned patterns are associated with predefined tags, as described in ¶0127-0128).
Claim 23
See the rejection of claim 23 under 35 U.S.C. 112(b) regarding issues with this limitation.
Taylor further discloses that the task is determined based on a motion planning algorithm (see ¶0148, regarding that at least one of the actions involves moving the autonomous personal companion, where the actions are generated based on the matched algorithm of the matched scenario, as described in ¶0146), further comprising generating alternative motion plans based on the contextual and semantic labels (see ¶0146-0148, regarding that a matched algorithm of the matched scenario is executed by the personal companion that includes actions that involve moving the autonomous personal companion, where predefined tags and patterns are accessed to determine the matched scenario, as described in ¶0144-0145; ¶0148, regarding different types of “motion plans” depending on the matched algorithm).
Claim 28
Taylor further discloses that the scene is generated by generating a digital twin and identifying objects within the digital twin (see ¶0124, with respect to Figure 7B, regarding that the scenarios and scenario algorithms define the local AI model 120 of the user; ¶0142, regarding that the AI model of the user is built based on a plurality of predefined or learned patterns that define the contextual environment of the user, where the personal companion monitors the user and the environment of the user via sensors, as described in ¶0041); and the user input is received via a user interface to the digital twin (see ¶0126, regarding user input data 701 is input into autonomous personal companion 100, depicted as local AI model 120 in Figure 7B).
Claim 29
Taylor further discloses determining one or more intervention objectives usable to determine the task based on the scene (see ¶0140-0146, with respect to Figure 8, regarding the comparison of the collected set of tags to each of a plurality of sets of predefined tags associated with a plurality of scenarios to select a matching scenario associated with a matching algorithm, where each of the sets of predefined tags is assigned a weight). The limitation of “intervention objectives” is not clearly defined in the claim, and therefore, any of the operations or parameters used to determine the particular action associated with the matched algorithm of Taylor may be reasonably applied as an “intervention objective.”
Claim 30
Taylor further discloses that the task comprises one or more constraints or characteristics for the task (see ¶0146-0148, regarding embodiments of different actions associated with a matched algorithm that may be implemented by the personal companion, e.g., broadcast relaxing digital content, positioning closer to the user, moving with the user, etc.).
Claim 33
See the rejection of claim 33 under 35 U.S.C. 112(b) regarding issues with this limitation.
Taylor further discloses that the updates to the knowledge database are generated based on user feedback and assessment of performance of the task (see ¶0126, regarding that the scenarios are continually updated based on changes in user input data, which includes monitoring the user and the environment, as described in ¶0041).
Claim 34
Taylor discloses the claimed system (see Figure 3B) comprising a memory (i.e. memory 304) storing thereon instructions that when executed by a processor (i.e. CPU 302) of the system, cause the system to perform operations (see ¶0075-0080) discussed in the rejection of claim 21.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 24-27, 32, and 35-39 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor in view of Baek et al. (US 2021/0072759 A1), hereinafter Baek.
Claims 24 and 35
While Taylor further discloses based on the knowledge database, identifying a task associated with the scene (see ¶0144-0146, regarding identifying a matched algorithm of the matched scenario based on the predefined tags associated with the plurality of scenarios that correspond to predefined patterns of behavior), Taylor does not further disclose dividing the task into sub-tasks, and determining a risk threshold based on the scene, the sub-tasks, and one or more trust thresholds. However, the division of the task and determination of a risk threshold do not influence the claimed operations, and therefore, it would be reasonable to modify Taylor to incorporate these additional steps, in light of Baek.
Specifically, Baek teaches the known technique of dividing a task (similar to the task taught by Taylor) into sub-tasks (see ¶0103-0105, with respect to S20 of Figure 3, regarding that a plurality of subtasks are generated according to a plurality of route sections from the current position to the destination), and determining a risk threshold (i.e. difficulty level, defined as being a level 1 to N in ¶0121) based on captured information (similar to the scene taught by Taylor), the sub-tasks, and one or more trust thresholds (see ¶0111-0120, with respect to step S30 of Figure 3, regarding that the difficulty level of the subtask is determined by comparing the driving difficulty level of the subtask with a reference difficulty level, defined as a threshold determined based on the driving capability of the robot 100 in ¶0120, where the driving difficulty level is determined from the congestion level, as described in ¶0118; ¶0123, regarding that the difficulty level of the plurality of subtasks is determined before the start of the driving). A “trust threshold” may be reasonably interpreted as the reference difficulty level of Baek, given that the capabilities of robot 100 inherently provide a degree of “trust.” For example, one would not trust robot 100 to navigate an area with detected obstacles (see ¶0114) when robot 100 does not have the driving capability to perform obstacle avoidance (see ¶0118). A “trust threshold” has not been defined in the claim, and therefore, prior art may be applied liberally to this claimed feature.
In Taylor, the robot is directed to performing personalized companionship and assistance that include driving operations. In Baek, the robot is directed to performing driving tasks. However, it is the techniques of dividing a robot task into sub-tasks and determining a risk threshold based on the scene, sub-tasks, and trust thresholds that are supplied by Baek; therefore, the particular types of tasks that the robot is capable of performing do not influence this combination.
Since the systems of Taylor and Baek are directed to the same purpose, i.e. providing autonomous robots that move within their environments to perform tasks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Taylor to further perform dividing the task into sub-tasks, and determining a risk threshold based on the scene, the sub-tasks, and one or more trust thresholds, in light of Baek, with the predictable result of providing a robot capable of completing a given task even when autonomous driving becomes difficult (¶0009 of Baek), such that high difficulty sub-tasks can be identified while maintaining autonomy in low difficulty sub-tasks (¶0164-0165 of Baek).
Claims 25 and 36
Baek further discloses based on the risk threshold, determining a ratio of the sub-tasks to be controlled by a user (see ¶0135-0136, with respect to step S40 of Figure 3, regarding determining whether to recruit applicants for the subtask based on the difficulty level determined in step S30; ¶0166-0171, with respect to Figure 5, regarding that the robot 100 autonomously drives during the subtask, unless the driving difficulty is determined to be greater than a reference value, e.g., high; Figure 4, depicting an example of a task comprising five subtasks, where subtasks 3-5 have been determined to have a high difficulty level). A “ratio” of subtasks is reasonably reflected by the number of subtasks with a driving difficulty level that exceeds the reference difficulty level, e.g., the “ratio” in the example provided in Figure 4 of Baek is taught by three of the subtasks (i.e. subtasks 3-5) out of the five total subtasks associated with the task.
Baek further discloses in accordance with the risk threshold, receiving a user input for controlling one or more of the sub-tasks when the ratio dictates that at least one of the sub-tasks requires the user input (see ¶0143-0144, regarding that the selected operator is granted operation right for remote control of robot 100); and causing performance of the sub-tasks by robot 100 (similar to the robotic device taught by Taylor) (see ¶0166-0181, with respect to Figure 5, regarding the control of the robot 100 through all subtasks of the task, depending on the difficulty level).
Claims 26 and 38
See the rejections of claims 26 and 38 under 35 U.S.C. 112(b) regarding issues with this limitation.
Baek further teaches the technique of simulating and evaluating a result of the task from the set of potential tasks (see ¶0198-0205, with respect to Figure 8, regarding the adjustment of the operator’s reliability in response to performing a subtask).
Since the systems of Taylor and Baek are directed to the same purpose, i.e. providing autonomous robots that move within their environments to perform tasks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Taylor to further perform simulating and evaluating a result of the task from the set of potential tasks, in light of Baek, with the predictable result of evaluating the reliability of operators (¶0013 of Baek), so as to provide a robot capable of completing a given task even when autonomous driving becomes difficult (¶0009 of Baek).
Claims 27, 37, and 39
Prior art is applied liberally due to the issues discussed in the rejections of claims 27 and 37 under 35 U.S.C. 112(b).
Taylor further discloses that the user input comprises a feedback loop with the user when user input is needed (see ¶0049, regarding that predicted results are compared to predetermined and true results obtained from previous interactions and monitoring of the user and environment in order to refine or modify the parameters used by the deep learning engine 190, where a cost function is used to measure the deviation between the prediction and expected result, as described in ¶0052; ¶0126, regarding that captured data is continually input). Taylor does not further disclose that the ratio is progressively updated over time based on updates to the knowledge database and the risk threshold. However, this feature does not influence the claimed operation; therefore, it would have been obvious to progressively update a ratio over time based on updates to a database and a risk threshold, in light of Baek.
Specifically, Baek teaches the known technique in which the ratio is progressively updated over time based on updates to the knowledge database and the risk threshold (see ¶0198-0205, with respect to Figure 8, regarding the operator’s reliability is updated based on the operator’s performance of a subtask, where the operator selected in step S40 of Figure 4 is based on the reliability of the operator, as described in ¶0141).
Since the systems of Taylor and Baek are directed to the same purpose, i.e. providing autonomous robots that move within their environments to perform tasks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Taylor, such that the ratio is progressively updated over time based on updates to the knowledge database and the risk threshold, in light of Baek, with the predictable result of updating the reliability of operators (¶0013 of Baek), so as to provide a robot capable of completing a given task even when autonomous driving becomes difficult (¶0009 of Baek).
Claim 32
Taylor does not further disclose that the risk threshold is indicative of a level of autonomy defined as one of full manual, augmented control, semi-autonomous, or fully-autonomous. However, the risk threshold is not recited in claim 21 from which claim 32 depends; therefore, including a “risk threshold” would be obvious in light of Baek. See the rejection of claim 32 under 35 U.S.C. 112(b) regarding issues with this limitation.
Specifically, Baek discloses that the risk threshold is indicative of a level of autonomy defined as one of full manual, augmented control, semi-autonomous, or fully-autonomous (see ¶0166-0182, with respect to Figure 5, regarding that the robot 100 performs autonomous driving unless the driving difficulty of the subtask is “high”). In this case, the “risk threshold” is indicative of a level of autonomy defined as one of full manual or fully-autonomous, depending on the application of the difficulty level of Baek. Due to the claim language, only one of the recited levels of autonomy is required to be taught by the prior art.
Since the systems of Taylor and Baek are directed to the same purpose, i.e. providing autonomous robots that move within their environments to perform tasks, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Taylor, such that the risk threshold is indicative of a level of autonomy defined as one of full manual, augmented control, semi-autonomous, or fully-autonomous, in light of Baek, with the predictable result of providing a robot capable of completing a given task even when autonomous driving becomes difficult (¶0009 of Baek), such that high difficulty sub-tasks can be identified while maintaining autonomy in low difficulty sub-tasks (¶0164-0165 of Baek).
Claims 31 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor in view of Baek and Amacker et al. (US 2019/0096134 A1), hereinafter Amacker.
Claims 31 and 40
Taylor does not further disclose generating a pre-execution virtual scene and proposed task sequence for presentation to the user, and receiving the trust threshold via the user input. However, these features do not influence the claimed operations; therefore, it would be obvious to incorporate these additional steps in Taylor, in light of the combination of Baek and Amacker.
Specifically, Baek teaches the known technique of generating a proposed task sequence for presentation to the user (see ¶0149-0151, regarding that the robot transmits its current state information to the operator, where the current state information includes subtask information to be performed). Baek further teaches the “trust threshold” as a fixed value or dependent on the driving capability of the robot (see ¶0120) and does not further disclose receiving the trust threshold via the user input. However, manual input of a threshold is well known in the art and capable of instant and unquestionable demonstration, and given that the system of Baek is provided with a user input (see ¶0088), the system is capable of receiving the “trust threshold” via the user input. Further, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the trust threshold of Baek to be received via the user input, with the predictable result of providing manual input of the robot’s capabilities.
While Baek further teaches the generation of a surrounding image captured from sensors on-board the robot for presentation to the user (see ¶0151), Baek does not further teach generating a pre-execution virtual scene for presentation to the user. However, this limitation does not influence the preceding limitations; therefore, it would be reasonable to combine the prior art to teach this claimed feature.
Specifically, Amacker teaches the known technique of generating a pre-execution virtual scene for presentation to the user (see abstract).
Since the systems of Taylor, Baek, and Amacker are directed to the same purpose, i.e. control of a robot, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Taylor to further perform the steps of generating a pre-execution virtual scene and proposed task sequence for presentation to the user, and receiving the trust threshold via the user input, in light of Baek and Amacker, with the predictable result of presenting dynamic augmented reality views from autonomous robots to users (¶0002-0003 of Amacker) and providing transparency of the current state of the robot for operator control, thus enabling semi-autonomous driving (¶0153 of Baek).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Specifically, Dey et al. (US 2020/0282561 A1) teaches the use of a distributed semantic knowledge base to gather task-specific data (see abstract), Lynen et al. (US 10,339,708 B2) teaches partitioning object data into corresponding scene files (see col. 10, lines 6-38), and Parker (US 2015/0314440 A1) teaches generating a virtual world that includes virtual objects, such that a user may interact with the virtual world to manipulate the virtual world object and generate a modified virtual world (see ¶0034).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sara J Lewandroski whose telephone number is (571)270-7766. The examiner can normally be reached Monday-Friday, 9 am-5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramya P Burgess can be reached at (571)272-6011. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARA J LEWANDROSKI/Examiner, Art Unit 3661