DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application is a national stage entry under 35 U.S.C. 371 of PCT/US2023/016641, filed 03/28/2023, which claims priority to a provisional application filed 03/28/2022.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/28/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 822a-b, 902. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
In paragraph 88, line 1, more than one space is used between “problem” and “of”;
In paragraph 97, line 2, “110a,b” should read “110a, b”;
In paragraph 97, line 4, “154a,b” should read “154a, b”;
In paragraph 101, line 5, “p reforms” should read “performs”;
In paragraph 121, line 1, more than one space is used between “4,” and “in”;
In paragraph 126, line 6, more than one space is used between “the” and “rollertop” and between “cart” and “where”;
In paragraph 128, line 7, “precessing” should read “processing”;
In paragraph 154, line 2, “782s” should read “782a”;
In paragraph 157, line 1, more than one space is used between “882” and “of”;
In paragraph 166, line 2, “combined” should read “combine”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite.
Claim limitation “payload engagement apparatus” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. While claims 1, 24, 26, and 27 and the specification recite a “payload engagement apparatus”, the drawings do not show a “payload engagement apparatus”, so it is unclear what “payload engagement apparatus” means in the context of the application. Additionally, the specification states that “…a payload engagement apparatus configured to pick and/or drop a payload at the location” and that “the payload engagement apparatus is configured to process the signal to deliver the payload to the horizontal surface”; it is therefore unclear whether the “payload engagement apparatus” is part of the invention’s payload lifting system or its control system. Accordingly, claims 1-27 are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5-14, 16-19, and 27 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bell (US20110218670A1).
Re Claim 1, Bell discloses a robotic vehicle (Fig. 2), comprising:
a navigation system configured to autonomously navigate the vehicle to a location (Paragraph 0057, “The manager 626 includes software code (e.g., processor executable instructions) that is configured to instruct the automated vehicle, such as the forklift, to execute each and every task, for example transporting object loads.”),
a payload engagement apparatus configured to pick and/or drop a payload at the location (Paragraph 0021, “Automated vehicle software uses the orientation information to position one or more lifting elements, such as forks, for optimal insertion into entry points of the object load. Then, the automated vehicle software uses path information to transport and place the object load at a target destination as describe further below.”);
one or more sensors configured to collect three-dimensional (3D) sensor data of an infrastructure at the location (Paragraph 0049, “FIG. 5A illustrates a scanning process to generate laser scanner data for a horizontal plane (i.e., an x-y plane) comprising the rack system 502. FIG. 5B is an image illustrating a vertical plane (i.e., a y-z plane) in front of the rack system 502. The laser scanner data and/or the image are used to determine relative distances from a forklift to the rack system 502. A portion of the rack system 502 may be a target destination for an object load (e.g., the object 402 of FIG. 4) as explained further below.”); and
at least one processor in communication with at least one storage device (Paragraph 0056, “The central computer 106 is a type of computing device (e.g., a laptop computer, a desktop computer, a Personal Desk Assistant (PDA) and the like) that comprises a central processing unit (CPU) 616, various support circuits 618 and a memory 620. The CPU 616 may comprise one or more commercially available microprocessors or microcontrollers that facilitate data processing and storage.”) and configured to process the collected sensor data to perform an infrastructure localization analysis to determine if the infrastructure is a modeled infrastructure type (Paragraph 0050, “Once an object recognition process identifies the rack system 502 by comparing rack system models with data captured by the laser scanner 304 and a camera, various software modules define an entry point orientation associated with a shelf 504 within the rack system 502. In some embodiments, the entry point orientation includes numerous measurements indicating angular displacement, such as Ry, Rx 514 and Rz 510, and linear displacement, such as Ty 508, Tx 506 and Tz 512, about the x, y and z-axes.”) and if so to determine if a horizontal surface of the infrastructure is obstruction free (Paragraph 0051, “Furthermore, the various software modules determine whether a pallet or another object load is occupying a target destination prior to placing the object load.”).
Re Claim 2, Bell discloses wherein the mobile robotics vehicle is an autonomous mobile robot forklift (Fig. 2).
Re Claim 3, Bell discloses wherein the one or more sensors comprises at least one 3D sensor (Paragraph 0062, “In another embodiment, the one or more laser scanners (e.g., three-dimensional laser scanners) analyze objects within the physical environment and capture data relating to various physical attributes, such as size and shape.”).
Re Claim 5, Bell discloses wherein the at least one 3D sensor comprises at least one stereo camera and/or 3D camera (Paragraph 0079, “The environment sensing module (e.g., the environment sensing module 630 of FIG. 3) applies image processing techniques on images of the industrial environment to identify the object load. For example, the environment sensing module may combine consecutive images to identify three-dimensional objects within a camera field of view.”).
Re Claim 6, Bell discloses wherein the one or more sensors includes one or more onboard vehicle sensors (Paragraph 0034, “The forklift 200 is also coupled with the sensor array 108, which transmits data (e.g., image data, video data, range map data and/or three-dimensional graph data) to the mobile computer 104, which stores the sensor array data according to some embodiments. As described in detail further below, the sensor array 108 includes various devices, such as a laser scanner and a camera, for capturing the sensor array data associated with an object load.”).
Re Claim 7, Bell discloses wherein the sensor data includes point cloud data (Paragraph 0062, “The laser scanner creates a point cloud of geometric samples on the surface of the subject.”).
Re Claim 8, Bell discloses wherein the at least one processor is further configured to determine features of the infrastructure and/or the horizontal surface from the sensor data and perform the infrastructure localization analysis based, at least in part, on the features of the infrastructure and/or the horizontal surface (Paragraph 0063, “The data produced by the laser scanner indicates a distance to each point on each object surface. Based on these distances, the object recognition process 628 determines a three dimensional position of the each point in a local coordinate system relative to each laser scanner. The environment sensing module 630 transposes each three-dimensional position to be relative to the vehicle.”).
Re Claim 9, Bell discloses wherein the at least one processor is further configured to compare the features of the infrastructure and/or horizontal surface to features of the modeled infrastructure type to determine if the features of the infrastructure and/or horizontal surface indicate that the infrastructure at the location matches the modeled infrastructure type and if so the infrastructure is localized (Paragraph 0071, “At step 808, an object recognition process is executed. Various software modules, such as the environment sensing module (e.g., the environment sensing module 630 of FIG. 6), perform the object recognition process (e.g., the object recognition process 628 of FIG. 6) by comparing the sensor array data with the various object models as described in the present disclosure… As another example, the object recognition process may utilize feature extraction processing techniques, such as edge detection, to identify the particular object, such as a rack system.”).
Re Claim 10, Bell discloses wherein the features of the modeled infrastructure type include dimensions of one or more edges of a modeled horizontal surface (Paragraph 0053, “Then, the various software modules fit the matching rack system against the image as depicted in FIG. 5B to compute the value for Rx 514. In one embodiment, feature extraction processing techniques, such as edge detection, may be utilized to identify the rack system 502 and compute the various measurements that constitute the entry point orientation of the shelf 504.”).
Re Claim 11, Bell discloses wherein the features of the modeled infrastructure type include dimensions of a plurality of edges of the modeled horizontal surface (Paragraph 0053, “Then, the various software modules fit the matching rack system against the image as depicted in FIG. 5B to compute the value for Rx 514. In one embodiment, feature extraction processing techniques, such as edge detection, may be utilized to identify the rack system 502 and compute the various measurements that constitute the entry point orientation of the shelf 504.”).
Re Claim 12, Bell discloses wherein the modeled horizontal surface is a drop surface configured to support the payload (Paragraph 0049, “FIG. 5A illustrates a scanning process to generate laser scanner data for a horizontal plane (i.e., an x-y plane) comprising the rack system 502. FIG. 5B is an image illustrating a vertical plane (i.e., a y-z plane) in front of the rack system 502. The laser scanner data and/or the image are used to determine relative distances from a forklift to the rack system 502. A portion of the rack system 502 may be a target destination for an object load (e.g., the object 402 of FIG. 4) as explained further below.”).
Re Claim 13, Bell discloses wherein the features of the modeled infrastructure type include a height of the drop surface (Fig. 5B, Paragraph 0051, “The various software modules cooperate to identify and locate the shelf 504 in a coordinate system, relative to the automated vehicle, using values for the linear displacement measurements Tx 506, Ty 508 and Tz 512.”).
Re Claim 14, Bell discloses wherein the features of the modeled infrastructure type include an orientation of the drop surface (Fig. 5A-B, Paragraph 0050, “Once an object recognition process identifies the rack system 502 by comparing rack system models with data captured by the laser scanner 304 and a camera, various software modules define an entry point orientation associated with a shelf 504 within the rack system 502. In some embodiments, the entry point orientation includes numerous measurements indicating angular displacement, such as Ry, Rx 514 and Rz 510, and linear displacement, such as Ty 508, Tx 506 and Tz 512, about the x, y and z-axes.”).
Re Claim 16, Bell discloses wherein the modeled horizontal surface is predefined as a number of points or a point density, wherein the point density is a number of points per square meter of surface (Paragraph 0062, “In another embodiment, the one or more laser scanners (e.g., three-dimensional laser scanners) analyze objects within the physical environment and capture data relating to various physical attributes, such as size and shape. The captured data can then be compared with three-dimensional object models. The laser scanner creates a point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (i.e., reconstruction)”).
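For illustration of the point-density concept recited in claim 16, a minimal Python sketch follows. The function names, the threshold value, and the assumption that the point cloud has already been cropped to the candidate surface are hypothetical and are not drawn from Bell or from the application:

    import numpy as np

    def point_density(surface_points: np.ndarray, surface_area_m2: float) -> float:
        # Number of 3D samples landing on the candidate surface, per square meter.
        return len(surface_points) / surface_area_m2

    def meets_modeled_density(surface_points: np.ndarray, surface_area_m2: float,
                              min_points_per_m2: float = 200.0) -> bool:
        # Treat the surface as the modeled horizontal surface only if it is
        # sampled densely enough (threshold is a hypothetical example value).
        return point_density(surface_points, surface_area_m2) >= min_points_per_m2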
Re Claim 17, Bell discloses wherein the at least one processor is further configured to localize the infrastructure based on one or more of the edges of the infrastructure matching one or more edges of the modeled infrastructure type (Paragraph 0053, “Then, the various software modules fit the matching rack system against the image as depicted in FIG. 5B to compute the value for Rx 514. In one embodiment, feature extraction processing techniques, such as edge detection, may be utilized to identify the rack system 502 and compute the various measurements that constitute the entry point orientation of the shelf 504.”).
Re Claim 18, Bell discloses wherein the at least one processor is further configured to localize the infrastructure based on one or more of the edges of the horizontal surface matching one or more edges of the modeled horizontal surface (Paragraph 0063, “The data produced by the laser scanner indicates a distance to each point on each object surface. Based on these distances, the object recognition process 628 determines a three dimensional position of the each point in a local coordinate system relative to each laser scanner.”).
Re Claim 19, Bell discloses wherein the at least one processor is further configured to localize the infrastructure based on the height and orientation of the drop surface matching the modeled infrastructure type (Paragraph 0050, “Once an object recognition process identifies the rack system 502 by comparing rack system models with data captured by the laser scanner 304 and a camera, various software modules define an entry point orientation associated with a shelf 504 within the rack system 502. In some embodiments, the entry point orientation includes numerous measurements indicating angular displacement, such as Ry, Rx 514 and Rz 510, and linear displacement, such as Ty 508, Tx 506 and Tz 512, about the x, y and z-axes.”).
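For illustration of localization by matching the drop surface’s height and orientation to the modeled infrastructure type, as addressed in claims 13, 14, and 19, a minimal Python sketch follows. The tolerance values and all names are hypothetical, not taken from Bell or the application:

    def infrastructure_localized(measured_height_m: float, measured_tilt_deg: float,
                                 model_height_m: float, model_tilt_deg: float,
                                 height_tol_m: float = 0.05,
                                 tilt_tol_deg: float = 2.0) -> bool:
        # The sensed drop surface matches the modeled infrastructure type only if
        # its height and orientation agree with the model within fixed tolerances.
        return (abs(measured_height_m - model_height_m) <= height_tol_m
                and abs(measured_tilt_deg - model_tilt_deg) <= tilt_tol_deg)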
Re Claim 27, Bell discloses a method of horizontal infrastructure assessment (Fig. 8), comprising:
providing a robotic vehicle comprising a navigation system configured to autonomously navigate the vehicle to a location (Paragraph 0069, “FIG. 8 is a flow diagram of a method 800 for sensing object load engagement, transportation and disengagement by automated vehicles according to various embodiments of the present invention. An environment sensing module within a central computer performs the method 800 according to some embodiments.”),
a payload engagement apparatus configured to pick and/or drop a payload at the location (Paragraph 0027, “The vehicle 102 utilizes one or more lifting elements, such as forks, to lift one or more units 114 and then, transport these units 114 along a path (e.g., a pre-defined route or a dynamically computed route) to be placed at a designated location.”),
one or more sensors (Paragraph 0070, “At step 804, sensor array data is processed. As explained in the present disclosure, a sensor array (e.g., the sensor array 108 of FIG. 1 and/or the sensor head 320 of FIG. 3) includes various devices, such as a laser scanner and/or a camera, for capturing data associated with various objects.”), and
at least one processor in communication with at least one storage device (Paragraph 0070, “These devices transmit image data and/or laser scanner data, which is stored in a mobile computer as the sensor array data (e.g., the sensor array data 610 of FIG. 6) according to some embodiments.”);
the one or more sensors collecting three-dimensional (3D) sensor data of an infrastructure at the location (Paragraph 0070, “At step 804, sensor array data is processed. As explained in the present disclosure, a sensor array (e.g., the sensor array 108 of FIG. 1 and/or the sensor head 320 of FIG. 3) includes various devices, such as a laser scanner and/or a camera, for capturing data associated with various objects.”); and
the at least one processor processing the collected sensor data to perform an infrastructure localization analysis to determine if the infrastructure is a modeled infrastructure type (Paragraph 0070, “At step 804, sensor array data is processed. As explained in the present disclosure, a sensor array (e.g., the sensor array 108 of FIG. 1 and/or the sensor head 320 of FIG. 3) includes various devices, such as a laser scanner and/or a camera, for capturing data associated with various objects.”) and if so to determine if a horizontal surface of the infrastructure is obstruction free (Paragraph 0051, “Furthermore, the various software modules determine whether a pallet or another object load is occupying a target destination prior to placing the object load.”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Bell (US20110218670A1) in view of Diankov (US20200376670A1).
Re Claim 4, while Bell discloses 3D laser scanning (at least Paragraph 0062), Bell does not explicitly disclose wherein the at least one 3D sensor comprises at least one 3D LiDAR scanner system. However, Diankov teaches a robotic system that uses LiDAR sensors to obtain information needed for completing tasks (Paragraphs 0043-0044, “The robotic system 100 can include the sensors 216 configured to obtain information used to implement the tasks, such as for manipulating the structural members and/or for transporting the robotic units… In some embodiments, for example, the sensors 216 can include one or more imaging devices 222 (e.g., visual and/or infrared cameras, two-dimensional (2D) and/or 3D imaging cameras, distance measuring devices such as lidars or radars, etc.) configured to detect the surrounding environment.”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell’s forklift with Diankov’s LiDAR sensors because LiDARs are distance sensors that would allow the forklift’s sensor array to provide high-resolution 3D mapping and accurate surface measurements.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Bell (US20110218670A1) in view of Kanunikov (CN112589791A provided translation).
Re Claim 15, Bell does not explicitly disclose wherein the features of the modeled infrastructure type include a surface density of the drop surface. However, Kanunikov teaches a robotic system that uses the maximum footprint density (occupancy area density) for determining the placement position of its parcels (Page 14, “The robotic system 100 may evaluate or score each of the placement combinations 744 according to one or more predetermined criteria… An example of the criterion may include maximization of footprint density. The robot system 100 can calculate the footprint density of the outer periphery 762 of the parcel group.”).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell’s forklift with Kanunikov’s use of footprint density because it would allow the forklift’s sensor array to acquire density data for precise calculations of slope and aspect.
Claims 20-26 are rejected under 35 U.S.C. 103 as being unpatentable over Bell (US20110218670A1) in view of Yoshida (JP2011093058A provided translation).
Re Claim 20, Bell teaches a forklift that is able to determine whether the target destination for its pallets is obstructed (Paragraph 0051, “Furthermore, the various software modules determine whether a pallet or another object load is occupying a target destination prior to placing the object load.”) but does not explicitly disclose wherein the at least one processor is further configured to generate a volume of interest (VOI) that has the same or greater dimensions than the payload and to use the VOI to determine if the horizontal surface is obstruction free.
However, Yoshida teaches a robotic holding mechanism that generates a region to determine if an object of the same size can be gripped within that space (Page 2, “In step S300, an area having the same size as the selected gripping area in the three-dimensional information is extracted as a grippable area when the extraction condition is satisfied. Here, the extraction condition is, ‘In the region having the same size as the selected gripping region in the three-dimensional information, the object exists in the entire region having the same size as the gripping portion region, but the size is the same as the gripping mechanism region. There is no object in the area.’”).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell’s forklift with Yoshida’s object determination in order to further validate whether the target surface is obstructed.
Re Claim 21, Bell does not explicitly disclose wherein the dimensions of the VOI are substantially the same as the dimensions of the payload. However, Yoshida teaches a robotic holding mechanism whose generated selected region is the same size as its target (Page 2, “In step S300, an area having the same size as the selected gripping area in the three-dimensional information is extracted as a grippable area when the extraction condition is satisfied. Here, the extraction condition is, ‘In the region having the same size as the selected gripping region in the three-dimensional information, the object exists in the entire region having the same size as the gripping portion region, but the size is the same as the gripping mechanism region. There is no object in the area…’”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell’s forklift with Yoshida’s region generation because the forklift requires a precisely sized selection region about its target.
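For illustration of the VOI concept addressed in claims 20 and 21, a minimal Python sketch follows. The axis-aligned box representation, the coordinate-frame convention, and all names and dimensions are hypothetical, not drawn from Bell, Yoshida, or the application:

    import numpy as np

    def voi_is_obstruction_free(points: np.ndarray, voi_min: np.ndarray,
                                voi_max: np.ndarray) -> bool:
        # points: (N, 3) sensed 3D points expressed in the drop-surface frame.
        # voi_min, voi_max: opposite corners of a box with the same or greater
        # dimensions than the payload, sitting above the drop surface.
        inside = np.all((points >= voi_min) & (points <= voi_max), axis=1)
        return not bool(inside.any())

    # Example: a VOI sized to a hypothetical 1.2 m x 1.0 m x 1.5 m payload.
    # clear = voi_is_obstruction_free(cloud, np.array([0.0, 0.0, 0.0]),
    #                                 np.array([1.2, 1.0, 1.5]))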
Re Claim 22, Bell discloses wherein if the infrastructure is localized, the processor is further configured to associate (Paragraph 0051, “The various software modules cooperate to identify and locate the shelf 504 in a coordinate system, relative to the automated vehicle, using values for the linear displacement measurements Tx 506, Ty 508 and Tz 512. The value for the Tx 506 may refer to a depth at which an object load is to be placed and/or engaged. The various software modules also cooperate to determine values for the angular displacement measurements Rx 514 and Rz 510 of the shelf 504. Furthermore, the various software modules determine whether a pallet or another object load is occupying a target destination prior to placing the object load.”). As discussed in the rejection of claim 20, Yoshida discloses the VOI. Since Bell already determines whether another pallet is occupying the target destination shelf for the pallet it is transporting, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Bell’s forklift with Yoshida’s region generation because the selected region generated would match both the pallet being transported by the forklift and the shelf to which the pallet would be delivered, allowing detection of obstructions in the desired area of interest.
Re Claim 23, Modified Bell discloses wherein if an obstruction is not indicated within the VOI, the processor is further configured to generate a signal indicating that the horizontal infrastructure is obstruction free (Bell, Paragraph 0043, “The laser scanner 304 and the camera 306 enable obstacle detection at the target destination because mounting these devices below the forks 302 allows various software modules to determine if the target destination is clear of any obstructions before unloading the object load. The various software modules search for such obstructions by examining the sensor array data.”; Yoshida discloses the VOI as discussed in the rejection of claim 20).
Re Claim 24, Bell discloses wherein if the horizontal infrastructure is obstruction free, the payload engagement apparatus is configured to process the signal to deliver the payload to the horizontal surface (Paragraph 0043, “If the laser scanner does not detect any points then there are no obstructions above or near the target destination and the forklift 200 can unload the object load successfully.”).
Re Claim 25, Modified Bell discloses wherein if an obstruction is indicated within the VOI, the processor is further configured to generate a signal indicating that the horizontal infrastructure is not obstruction free (Bell, Paragraph 0043, “The laser scanner 304 and the camera 306 enable obstacle detection at the target destination because mounting these devices below the forks 302 allows various software modules to determine if the target destination is clear of any obstructions before unloading the object load. The various software modules search for such obstructions by examining the sensor array data.”; Yoshida discloses the VOI as discussed in the rejection of claim 20).
Re Claim 26, Bell discloses wherein if the horizontal infrastructure is not obstruction free, the payload engagement apparatus is configured to process the signal to abort delivery of the payload to the horizontal surface (Paragraph 0043, “The laser scanner 304 and the camera 306 enable obstacle detection at the target destination because mounting these devices below the forks 302 allows various software modules to determine if the target destination is clear of any obstructions before unloading the object load. The various software modules search for such obstructions by examining the sensor array data.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW KILLIAN PEPPER whose telephone number is (571)272-6815. The examiner can normally be reached Monday - Friday 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached at (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.K.P./Examiner, Art Unit 3657 /ABBY LIN/Supervisory Patent Examiner, Art Unit 3657