DETAILED ACTION
This application includes independent claims 1 and 13 and dependent claims 2-12 and 14-24.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-6, 12, 13, 15-18, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Bastian, II et al. (US 2017/0066592) in view of Kurz et al. (US 2014/0254874).
Regarding independent claim 1, Bastian discloses an autonomous guided vehicle comprising: a frame (1202) with a payload hold (1212); a drive section (1206) coupled to the frame with drive wheels (1102) supporting the autonomous guided vehicle on a traverse surface, the drive wheels effect vehicle traverse on the traverse surface moving the autonomous guided vehicle over the traverse surface in a facility; a payload handler (1216) coupled to the frame configured to transfer a payload, with a flat undeterministic seating surface (see at least para. 0083) seated in the payload hold, to and from the payload hold of the autonomous guided vehicle and a storage location, of the payload, in a storage array (see Fig. 12); a vision system (1224) mounted to the frame, having more than one camera (see para. 0100) disposed to generate binocular images (see para. 0100) of a field of a logistic space including rack structure shelving (1204) on which more than one objects are stored; and a controller, communicably connected to the vision system so as to register the binocular images (see paras. 0077 and 0100). Bastian discloses all the limitations of the claim, but it does not disclose that the controller is configured to effect stereo matching, from the binocular images, resolving a dense depth map of imaged objects in the field, and the controller is configured to detect from the binocular images, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map; wherein the controller has an object extractor configured to determine location and pose of each imaged object from both the dense depth map resolved from the binocular images and the depth resolution from the stereo sets of keypoints.
Bastian does not disclose how the stereo images are processed. However, Kurz discloses a similar vision system which includes a controller configured to effect stereo matching, from binocular images, resolving a dense depth map of imaged objects in the field, and the controller is configured to detect from the binocular images, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map (see Fig. 8 and para. 0135); wherein the controller has an object extractor configured to determine location and pose of each imaged object from both the dense depth map resolved from the binocular images and the depth resolution from the stereo sets of keypoints (see Fig. 8 and para. 0135) for the purpose of processing stereo images received from cameras. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the applicant’s invention, to utilize a controller configured to detect from the binocular images, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map; wherein the controller has an object extractor configured to determine location and pose of each imaged object from both the dense depth map resolved from the binocular images and the depth resolution from the stereo sets of keypoints, as disclosed by Kurz, for the purpose of processing stereo images received from the cameras.
Regarding independent claim 13, Bastian discloses an autonomous guided vehicle comprising: a frame (1202) with a payload hold (1212); a drive section (1206) coupled to the frame with drive wheels (1102) supporting the autonomous guided vehicle on a traverse surface, the drive wheels effect vehicle traverse on the traverse surface moving the autonomous guided vehicle over the traverse surface in a facility; a payload handler (1216) coupled to the frame configured to transfer a payload, with a flat undeterministic seating surface (see at least para. 0083) seated in the payload hold, to and from the payload hold of the autonomous guided vehicle and a storage location, of the payload, in a storage array (see Fig. 12); a vision system (1224) mounted to the frame, having more than one camera (see para. 0100) disposed to generate binocular images (see para. 0100) of a field of a logistic space including rack structure shelving (1204) on which more than one objects are stored; and a controller, communicably connected to the vision system so as to register the binocular images (see paras. 0077 and 0100). Bastian discloses all the limitations of the claim, but it does not disclose that the controller is configured to effect stereo matching, from the binocular images, resolving a dense depth map of imaged objects in the field, and the controller is configured to detect from the binocular images, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map; wherein the controller has an object extractor configured to identify location and pose of each imaged object based on superpose of stereo sets of keypoints depth resolution and depth map. Bastian does not disclose how the stereo images are processed.
However, Kurz discloses a similar vision system which includes a controller configured to effect stereo matching, from binocular images, resolving a dense depth map of imaged objects in the field, and the controller is configured to detect from the binocular images, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map (see at least Fig. 8 and para. 0135); wherein the controller has an object extractor configured to identify location and pose of each imaged object based on superpose of stereo sets of keypoints depth resolution and depth map (see at least Fig. 8 and para. 0135) for the purpose of processing stereo images received from cameras. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the applicant’s invention, to utilize a controller configured to detect from the binocular images, stereo sets of keypoints, each set of keypoints setting out, separate and distinct from each other set, a common predetermined characteristic of each imaged object, so that the controller determines from the stereo sets of keypoints depth resolution of each object separate and distinct from the dense depth map; wherein the controller has an object extractor configured to identify location and pose of each imaged object based on superpose of stereo sets of keypoints depth resolution and depth map, as disclosed by Kurz, for the purpose of processing stereo images received from cameras.
Regarding dependent claims 3-6, 12, 15-18, and 24, Bastian discloses that the more than one camera generate a video stream and the registered images are parsed from the video stream (see at least para. 0100); the more than one camera are unsynchronized with each other (see at least para. 0100); the binocular images are generated with the vehicle in motion past the objects (see at least para. 0078); the more than one objects on the rack structure are dynamically positioned in closely packed juxtaposition with respect to each other (see at least para. 0083); and the controller is configured to generate at least one of an execute command and a stop command of a bot actuator based on the determined location and pose (see at least paras. 0081-0082).
Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bastian, II et al. (US 2017/0066592) in view of Kurz et al. (US 2014/0254874) and further in view of Baldwin (US 11,317,031). The combination of Bastian and Kurz discloses all the limitations of the claims, but it does not disclose that the more than one camera are rolling shutter cameras. However, Baldwin discloses a similar device which utilizes a rolling shutter camera (see col. 3, line 66) for the purpose of acquiring images in areas having strong ambient illumination (see col. 2, lines 30-45). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the applicant’s invention, to utilize rolling shutter cameras, as disclosed by Baldwin, for the purpose of acquiring images in areas having strong ambient illumination.
Allowable Subject Matter
Claims 7-11 and 19-23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Li et al. (US 12,534,297) and Panzarella et al. (US 2025/0181081) disclose autonomous guided vehicles, used to pick items from storage locations, which utilize vision systems that extract point cloud data from stereo images to identify items.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK HEWEY MACKEY whose telephone number is (571)272-6916. The examiner can normally be reached M - F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael McCullough can be reached at 571-272-7805. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PATRICK H MACKEY/Primary Examiner, Art Unit 3653