DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/08/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 21 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claim 21 is directed to a computing system that communicates with a robot (i.e., a machine). Therefore, claim 21 falls within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 21 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Claim 21 recites:
A computing system comprising:
a communication interface configured to communicate with a robot and with a camera; and at least one processing circuit configured, when an object is or has been in a field of view of the
camera, to:
receive image information, generated by the camera, representing the object;
identify one or more object recognition templates corresponding to an object or an object
type;
select a primary object template from among the one or more object recognition templates
based on matching the image information with the one or more object recognition templates;
generate a primary candidate region based on the primary object template;
determine at least one of:
(i) a subset of one or more remaining matching object recognition templates, or
(ii) an unmatched region of the image information that is adjacent to the primary candidate
region;
generate a safety volume list in response to a determination of the subset or the unmatched
region,
wherein the safety volume list includes at least one of:
(i) the unmatched region, or
(ii) one or more additional candidate regions based on the subset, wherein the one or more additional candidate regions estimate object boundary locations for the object or estimate locations in the field of view that are occupied by the object; and
perform motion planning for interaction between the robot and the object based on the primary candidate region and the safety volume list.
The examiner submits that the foregoing bolded limitations constitute mental processes. For example, the "identify…", "select…", "generate…", and "determine…" steps, in the context of this claim, encompass performing observation or judgment to obtain certain results which can be used to control the robot. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A computing system comprising:
a communication interface configured to communicate with a robot and with a camera; and
at least one processing circuit configured, when an object is or has been in a field of view of the
camera, to:
receive image information, generated by the camera, representing the object;
identify one or more object recognition templates corresponding to an object or an object
type;
select a primary object template from among the one or more object recognition templates
based on matching the image information with the one or more object recognition templates;
generate a primary candidate region based on the primary object template;
determine at least one of:
(i) a subset of one or more remaining matching object recognition templates, or
(ii) an unmatched region of the image information that is adjacent to the primary candidate
region;
generate a safety volume list in response to a determination of the subset or the unmatched
region,
wherein the safety volume list includes at least one of:
(i) the unmatched region, or
(ii) one or more additional candidate regions based on the subset, wherein the one or more additional candidate regions estimate object boundary locations for the object or estimate locations in the field of view that are occupied by the object; and
perform motion planning for interaction between the robot and the object based on the primary candidate region and the safety volume list.
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of "a communication interface configured to communicate …", "at least one processing circuit…", and "receive image information …", the examiner submits that these limitations are insignificant extra-solution activities that do not integrate the abstract idea into a practical application. In general, the receiving of image information generated by the camera is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Lastly, the "perform motion planning …" limitation merely describes how to generally "apply" the otherwise mental processes in a generic or general-purpose robot control environment. The robot control system is recited at a high level of generality.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 21 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitation of "receive image information …" is a well-understood, routine, and conventional activity, and the specification does not provide any indication that the robot is controlled by anything other than a conventional computer within a robot. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner.
Dependent claims 22-37 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 22-37 are not patent eligible under the same rationale as provided in the rejection of independent claim 21.
Claim 38 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claim 38 is directed to a non-transitory computer-readable medium having instructions that, when executed by at least one processing circuit of a computing system, cause the at least one processing circuit to communicate with a robot (i.e., a manufacture). Therefore, claim 38 falls within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 38 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Claim 38 recites:
A non-transitory computer-readable medium having instructions that, when executed by at least one processing circuit of a computing system, cause the at least one processing circuit to:
receive image information, generated by a camera, representing an object in a field of view of the camera, wherein the computing system is configured to communicate with: (i) a robot, and (ii) the camera;
identify one or more object recognition templates corresponding to an object or an object
type;
select a primary object template from among the one or more object recognition templates based on matching the image information with the one or more object recognition templates;
generate a primary candidate region based on the primary object template;
determine at least one of:
(i) a subset of one or more remaining matching object recognition templates, or
(ii) an unmatched region of the image information that is adjacent to the primary candidate region;
generate a safety volume list in response to a determination of the subset or the unmatched region,
wherein the safety volume list includes at least one of:
(i) the unmatched region, or
(ii) one or more additional candidate regions based on the subset, wherein the one or more additional candidate regions estimate object boundary locations for the object or estimate locations in the field of view that are occupied by the object; and
perform motion planning for interaction between the robot and the object based on the primary candidate region and the safety volume list.
The examiner submits that the foregoing bolded limitations constitute mental processes. For example, the "identify…", "select…", "generate…", and "determine…" steps, in the context of this claim, encompass performing observation or judgment to obtain certain results which can be used to control the robot. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A non-transitory computer-readable medium having instructions that, when executed by at least one processing circuit of a computing system, cause the at least one processing circuit to:
receive image information, generated by a camera, representing an object in a field of view of the camera, wherein the computing system is configured to communicate with: (i) a robot, and (ii) the camera;
identify one or more object recognition templates corresponding to an object or an object
type;
select a primary object template from among the one or more object recognition templates based on matching the image information with the one or more object recognition templates;
generate a primary candidate region based on the primary object template;
determine at least one of:
(i) a subset of one or more remaining matching object recognition templates, or
(ii) an unmatched region of the image information that is adjacent to the primary candidate region;
generate a safety volume list in response to a determination of the subset or the unmatched region,
wherein the safety volume list includes at least one of:
(i) the unmatched region, or
(ii) one or more additional candidate regions based on the subset, wherein the one or more additional candidate regions estimate object boundary locations for the object or estimate locations in the field of view that are occupied by the object; and
perform motion planning for interaction between the robot and the object based on the primary candidate region and the safety volume list.
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of "a communication interface configured to communicate …", "at least one processing circuit…", and "receive image information …", the examiner submits that these limitations are insignificant extra-solution activities that do not integrate the abstract idea into a practical application. In general, the receiving of image information generated by the camera is recited at a high level of generality and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Lastly, the "perform motion planning …" limitation merely describes how to generally "apply" the otherwise mental processes in a generic or general-purpose robot control environment. The robot control system is recited at a high level of generality.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 38 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitation of "receive image information …" is a well-understood, routine, and conventional activity, and the specification does not provide any indication that the robot is controlled by anything other than a conventional computer within a robot. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner.
Dependent claim 39 does not recite any further limitations that cause the claim to be patent eligible. Rather, the limitations of the dependent claim are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claim 39 is not patent eligible under the same rationale as provided in the rejection of independent claim 38.
Claim 40 is rejected under 35 U.S.C. 101 under a similar rationale as claims 21 and 38.
Therefore, claims 21-40 are ineligible under 35 U.S.C. § 101.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 21 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,900,652 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because selecting a primary object template from among the one or more object recognition templates based on matching the image information with the one or more object recognition templates is a broader limitation than selecting, as a primary detection hypothesis, a detection hypothesis from among the set of one or more detection hypotheses, wherein the primary detection hypothesis is associated with a matching object recognition template of the set of one or more matching object recognition templates, wherein the detection hypothesis that is selected as the primary detection hypothesis has a confidence value which is highest among a set of one or more respective confidence values, wherein the set of one or more respective confidence values are associated with the set of one or more detection hypotheses, and indicate respective degrees by which the image information matches the set of one or more matching object recognition templates associated with the set of one or more detection hypotheses. Removing inherent and/or unnecessary limitations or steps, or adding an element and its function, would be within the level of one of ordinary skill in the art. It is well settled that the adding or deleting of an element and its function in the claim of the present application is an obvious expedient if the remaining elements perform the same function as before. In re Karlson, 136 USPQ 184 (CCPA 1963). Also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969). Omission of a referenced element or step whose function is not needed would be obvious to one of ordinary skill in the art.
The examiner further notes that although the claims are not identical (the instant claims being slightly broader), they are commensurate in scope with the claim limitations provided in the issued U.S. Patent, which likewise would anticipate the currently presented claim limitations.
Instant Application No. 18437946
U.S. Patent No. US 11900652 B2
21. A computing system comprising: a communication interface configured to communicate with a robot and with a camera; and at least one processing circuit configured, when an object is or has been in a field of view of the camera, to: receive image information, generated by the camera, representing the object; identify one or more object recognition templates corresponding to an object or an object type;
1. A computing system comprising: a communication interface configured to communicate with a robot and with a camera having a camera field of view; and at least one processing circuit configured, when an object is or has been in the camera field of view, to: receive image information representing the object, wherein the image information is generated by the camera; identify a set of one or more matching object recognition templates, which are one or more object recognition templates that satisfy a predefined template matching condition when compared against the image information, wherein the set of one or more matching object recognition templates are associated with a set of one or more detection hypotheses, which are one or more respective estimates on which object or object type is represented by the image information;
select a primary object template from among the one or more object recognition templates based on matching the image information with the one or more object recognition templates;
select, as a primary detection hypothesis, a detection hypothesis from among the set of one or more detection hypotheses, wherein the primary detection hypothesis is associated with a matching object recognition template of the set of one or more matching object recognition templates, wherein the detection hypothesis that is selected as the primary detection hypothesis has a confidence value which is highest among a set of one or more respective confidence values, wherein the set of one or more respective confidence values are associated with the set of one or more detection hypotheses, and indicate respective degrees by which the image information matches the set of one or more matching object recognition templates associated with the set of one or more detection hypotheses;
generate a primary candidate region based on the primary object template; determine at least one of: (i) a subset of one or more remaining matching object recognition templates, or (ii) an unmatched region of the image information that is adjacent to the primary candidate region;
generate, as a primary candidate region, a candidate region which estimates object boundary locations for the object or estimates which locations in the camera field of view are occupied by the object, wherein the primary candidate region is generated based on the matching object recognition template associated with the primary detection hypothesis; determine at least one of: (i) whether the set of one or more matching object recognition templates has, in addition to the matching object recognition template associated with the primary detection hypothesis, a subset of one or more remaining matching object recognition templates that also satisfy the predefined template matching condition when compared against the image information, or (ii) whether the image information has a portion representing an unmatched region which is adjacent to the primary candidate region and which fails to satisfy the predefined template matching condition;
generate a safety volume list in response to a determination of the subset or the unmatched region, wherein the safety volume list includes at least one of: (i) the unmatched region, or (ii) one or more additional candidate regions based on the subset, wherein the one or more additional candidate regions estimate object boundary locations for the object or estimate locations in the field of view that are occupied by the object;
in response to a determination that there is the subset of one or more remaining matching object recognition templates, or that the image information has the portion representing the unmatched region, generate a safety volume list, which is a list that describes at least one of: (i) the unmatched region, or (ii) one or more additional candidate regions that also estimate object boundary locations for the object or estimate which locations are occupied by the object, wherein the one or more additional candidate regions are generated based on the subset of one or more remaining matching object recognition templates;
and perform motion planning for interaction between the robot and the object based on the primary candidate region and the safety volume list.
and perform motion planning based on the primary candidate region and based on the safety volume list, wherein the motion planning is for robot interaction between the robot and the object for gripping or picking up the object and moving the object from the occupied location of the object to a destination location.
22. The computing system of claim 21, wherein the at least one processing circuit is further configured to determine a bounding region encompassing the primary candidate region and at least one of: (i) the one or more additional candidate regions or (ii) the unmatched region, and to perform the motion planning for a trajectory associated with a robot end effector apparatus of the robot based on the bounding region.
2. The computing system of claim 1, wherein the at least one processing circuit is configured to determine a bounding region which encompasses the primary candidate region and at least one of: (i) the one or more additional candidate regions or (ii) the unmatched region, wherein performing the motion planning includes determining a trajectory associated with a robot end effector apparatus based on the bounding region.
23. The computing system of claim 22, wherein the at least one processing circuit is further configured to perform the motion planning including determining robot gripping motion based on the primary candidate region.
3. The computing system of claim 2, wherein performing the motion planning includes determining robot gripping motion based on the primary candidate region, and determining the trajectory based on the bounding region.
24. The computing system of claim 21, wherein for the subset of one or more remaining matching object recognition templates, the at least one processing circuit is further configured to: determine whether a respective confidence value associated with each of the subset of one or more remaining matching object recognition templates is within a confidence similarity threshold relative to a confidence value associated with the primary object template, include, in the safety volume list, a respective candidate region associated with the each of the subset of one or more remaining matching object recognition templates, in response to a determination that the respective confidence value is within the confidence similarity threshold, and supplement the one or more additional regions of the safety volume list with the respective candidate region.
4. The computing system of claim 1, wherein the set of one or more detection hypotheses include, in addition to the primary detection hypothesis, a subset of one or more remaining detection hypotheses which are associated with the subset of one or more remaining matching object recognition templates, wherein the at least one processing circuit is configured, for each detection hypothesis of the subset of one or more remaining detection hypotheses, to: determine whether a respective confidence value associated with the detection hypothesis is within a predefined confidence similarity threshold relative to the confidence value associated with the primary detection hypothesis, wherein the at least one processing circuit is configured to include, in the safety volume list, a respective candidate region associated with the detection hypothesis in response to a determination that the respective confidence value associated with the detection hypothesis is within the predefined confidence similarity threshold relative to the confidence value associated with the primary detection hypothesis, such that the respective candidate region is part of the one or more additional regions of the safety volume list.
25. The computing system of claim 24, wherein each candidate region of the one or more additional candidate regions in the safety volume list is associated with a confidence value that is within the confidence similarity threshold.
5. The computing system of claim 4, wherein each candidate region of the one or more additional candidate regions in the safety volume list is associated with a respective detection hypothesis which has a confidence value that is within the predefined confidence similarity threshold relative to the confidence value associated with the primary detection hypothesis.
26. The computing system of claim 24, wherein each candidate region of the one or more additional candidate regions in the safety volume list is associated with a confidence value that is greater than or equal to a template matching threshold.
6. The computing system of claim 4, wherein each candidate region of the one or more additional candidate regions in the safety volume list is associated with a respective detection hypothesis which has a confidence value that is greater than or equal to a predefined template matching threshold.
27. The computing system of claim 21, wherein the subset of one or more remaining matching object recognition templates includes a plurality of matching object recognition templates associated with a plurality of respective candidate regions, wherein the at least one processing circuit is further configured, for each candidate region of the plurality of candidate regions, to: determine a respective amount of overlap between the candidate region and the primary candidate region; determine whether the respective amount of overlap is equal to or exceeds an overlap threshold; and include the candidate region in the one or more additional candidate regions of the safety volume list, in response to the respective amount of overlap being equal to or exceeding the overlap threshold.
7. The computing system of claim 1, wherein the subset of one or more remaining matching object recognition templates include a plurality of matching object recognition templates associated with a plurality of respective candidate regions, wherein the at least one processing circuit is configured, for each candidate region of the plurality of candidate regions, to: determine a respective amount of overlap between the candidate region and the primary candidate region; determine whether the respective amount of overlap is equal to or exceeds a predefined overlap threshold, wherein the at least one processing circuit is configured to include the candidate region in the safety volume list in response to a determination that the amount of overlap is equal to or exceeds the predefined overlap threshold, such that the candidate region is part of the one or more additional candidate regions of the safety volume list.
28. The computing system of claim 21, wherein the image information includes 2D image information, and wherein the primary object template comprises a set of visual description information which is determined by the at least one processing circuit to satisfy a template matching condition when compared against the 2D image information.
8. The computing system of claim 1, wherein the image information includes 2D image information, and wherein the matching object recognition template associated with the primary detection hypothesis includes a set of visual description information which is determined by the at least one processing circuit to satisfy the predefined template matching condition when compared against the 2D image information.
29. The computing system of claim 28, wherein at least one object recognition template of the subset of one or more remaining object recognition templates has a set of visual description information that is determined by the at least one processing circuit to satisfy the template matching condition when compared against the 2D image information, and wherein the at least one processing circuit is further configured to generate the safety volume list based on at least one of the unmatched region, the one or more additional candidate regions, or the at least one object recognition template.
9. The computing system of claim 8, wherein at least one matching object recognition template of the subset of one or more remaining matching object recognition templates has a respective set of visual description information that is also determined by the at least one processing circuit to satisfy the predefined template matching condition when compared against the 2D image information, and wherein the at least one processing circuit is configured to generate the safety volume list based on the at least one matching object recognition template.
30. The computing system of claim 29, wherein the primary object template includes a respective set of structure description information that indicates a first object size, and wherein the at least one object recognition template includes a respective set of structure description information that indicates a second object size different than the first object size.
10. The computing system of claim 9, wherein the matching object recognition template associated with the primary detection hypothesis includes a respective set of structure description information that indicates a first object size, and wherein the at least one matching object recognition template includes a respective set of structure description information that indicates a second object size different than the first object size.
31. The computing system of claim 28, wherein the image information further includes 3D image information, and wherein at least one object recognition template of the subset of one or more remaining object recognition templates has a respective set of structure description information that is determined by the at least one processing circuit to satisfy the template matching condition when compared against the 3D image information, and wherein the at least one processing circuit is further configured to generate the safety volume list based on at least one of the unmatched region, the one or more additional candidate regions, or the at least one object recognition template.
11. The computing system of claim 8, wherein the image information further includes 3D image information, and wherein at least one object recognition template of the subset of one or more remaining matching object recognition templates has a respective set of structure description information that is determined by the at least one processing circuit to satisfy the predefined template matching condition when compared against the 3D image information, and wherein the at least one processing circuit is configured to generate the safety volume list based on the at least one object recognition template.
32. The computing system of claim 28, wherein the at least one processing circuit is further configured, when the one or more object recognition templates are part of a plurality of object recognition templates stored in a template storage space, to: determine whether the plurality of object recognition templates has, in addition to the primary object template, at least one object recognition template that satisfies a template similarity condition when compared against the primary object template; and in response to a determination that the at least one object recognition template satisfies the template similarity condition, generate the safety volume list based on at least one of the unmatched region, the one or more additional candidate regions, or the at least one object recognition template.
12. The computing system of claim 8, wherein the matching object recognition template associated with the primary detection hypothesis is a first matching object recognition template among the set of one or more matching object recognition templates, wherein the at least one processing circuit is configured, when the set of one or more matching object recognition templates are part of a plurality of object recognition templates stored in a template storage space, to: determine whether the plurality of object recognition templates has, in addition to the first matching object recognition template, at least one object recognition template which satisfies a predefined template similarity condition when compared against the first matching object recognition template; and in response to a determination that the plurality of object recognition templates includes the at least one object recognition template which satisfies the predefined template similarity condition when compared against the first matching object recognition template, generate the safety volume list based on the at least one object recognition template.
33. The computing system of claim 21, wherein the primary candidate region represents a first manner of aligning the image information with the primary object template, and wherein the at least one processing circuit is further configured to include in the safety volume list another candidate region which represents a second manner of aligning the image information with the primary object template.
13. The computing system of claim 1, wherein the primary candidate region represents a first manner of aligning the image information with the matching object recognition template associated with the primary detection hypothesis, and wherein the at least one processing circuit is configured to include in the safety volume list another candidate region which represents a second manner of aligning the image information with the matching object recognition template.
34. The computing system of claim 21, wherein the at least one processing circuit is further configured to: identify a first set of image corners or a first set of image edges represented by the image information; identify a first image region located between the first set of image corners or the first set of image edges, wherein the primary object template is determined by the at least one processing circuit to satisfy a template matching condition when compared against the first image region, the primary object template being a first object recognition template among the one or more object recognition templates; identify, based on the image information, a second set of image corners or a second set of image edges, wherein the second set of image corners include at least one image corner which is part of the first set of image corners and include at least one image corner which is outside of the first image region, and wherein the second set of image edges include at least one image edge which is part of the first set of image edges and include at least one image edge which is outside the first image region; and identify a second image region located between the second set of image corners or the second set of image edges, wherein the second image region extends beyond the first image region, and wherein the one or more object recognition templates includes a second matching object recognition template, which is determined by the at least one processing circuit to satisfy the template matching condition when compared against the second image region, wherein the at least one processing circuit is further configured to generate the primary candidate region based on the first object recognition template, and to generate at least one candidate region in the safety volume list based on the second matching object recognition template.
14. The computing system of claim 1, wherein the at least one processing circuit is configured to: identify a first set of image corners or a first set of image edges represented by the image information; identify a first image region, which is an image region located between the first set of image corners or the first set of image edges, wherein the matching object recognition template associated with the primary detection hypothesis is determined by the at least one processing circuit to satisfy the predefined matching condition when compared against the first image region, the matching object recognition template being a first matching object recognition template among the set of one or more matching object recognition templates; identify, based on the image information, a second set of image corners or a second set of image edges, wherein the second set of image corners include at least one image corner which is part of the first set of image corners and include at least one image corner which is outside of the first image region, and wherein the second set of image edges include at least one image edge which is part of the first set of image edges and include at least one image edge which is outside the first image region; identify a second image region, which is an image region located between the second set of image corners or the second set of image edges, wherein the second image region extends beyond the first image region, and wherein the set of one or more matching object recognition templates includes a second matching object recognition template, which is determined by the at least one processing circuit to satisfy the predefined template matching condition when compared against the second image region, wherein the at least one processing circuit is configured to generate the primary candidate region based on the first matching object recognition template, and to generate at least one candidate region in the safety volume list based on the second matching object recognition template.
35. The computing system of claim 21, wherein the at least one processing circuit is further configured to generate a new object recognition template based on the unmatched region, in response to a determination that the image information has the unmatched region.
15. The computing system of claim 1, wherein the at least one processing circuit is configured, in response to a determination that the image information has the portion representing the unmatched region, to generate a new object recognition template based on the unmatched region.
36. The computing system of claim 21, wherein the primary candidate region represents a first orientation for an object shape described by the primary object template, and wherein the at least one processing circuit is further configured to add, to the safety volume list, a candidate region that represents a second orientation for the object shape, the second orientation being perpendicular to the first orientation.
16. The computing system of claim 1, wherein the primary candidate region is a region representing a first orientation for an object shape described by the matching object recognition template associated with the primary detection hypothesis, and wherein the at least one processing circuit is configured to add, to the safety volume list, a candidate region which represents a second orientation for the object shape, the second orientation being perpendicular to the first orientation.
37. The computing system of claim 21, wherein the at least one processing circuit is further configured to add, to the safety volume list, a candidate region that represents a maximum object height.
17. The computing system of claim 1, wherein the at least one processing circuit is configured to add, to the safety volume list, a candidate region which represents a predefined maximum object height.
38. A non-transitory computer-readable medium having instructions that, when executed by at least one processing circuit of a computing system, cause the at least one processing circuit to: receive image information, generated by a camera, representing an object in a field of view of the camera, wherein the computing system is configured to communicate with: (i) a robot, and (ii) the camera; identify one or more object recognition templates corresponding to an object or an object type; select a primary object template from among the one or more object recognition templates based on matc