DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application is a national stage entry under 35 U.S.C. 371 of PCT/US23/16612, filed 03/28/2023, which claims priority to a provisional application filed 03/28/2022.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/15/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The disclosure is objected to because of the following informalities:
The spacing of the lines of the specification is such as to make reading difficult. New application papers with lines 1 1/2 or double spaced (see 37 CFR 1.52(b)(2)) on good quality paper are required.
In paragraph 0007, line 1, a comma should be used after “vehicle” or the comma after “(AMR)” should be deleted.
In paragraph 0048, line 6, a preposition should be inserted between “associated” and “different”.
In paragraph 0052, line 1, a comma should be used after “embodiments”.
In paragraph 0055, line 3, “of any,” should read “if any,”.
Appropriate correction is required.
Claim Objections
Claim 11 is objected to because of the following informalities: In lines 3-7, the structure is listed before the action; therefore, the method step is not positively recited. Appropriate correction is required.
Claims 12-13 are objected to because of the following informalities: The structure is listed before the action; therefore, the method steps are not positively recited. Appropriate correction is required.
Claim 15 is objected to because of the following informalities: In lines 2-4, the structure is listed before the action; therefore, the method step is not positively recited. Appropriate correction is required.
Claim 17 is objected to because of the following informalities: In lines 2-3, the structure is listed before the action; therefore, the method step is not positively recited. Appropriate correction is required.
Claim 18 is objected to because of the following informalities: In line 2, the structure is listed before the action; therefore, the method step is not positively recited. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination.—An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a pallet detection system configured to provide a pose of a payload;” in claim 1. The specification appears to describe the pallet detection system as software, or a processor executing software, that analyzes sensor data, both throughout the specification and in US 2022/0100195, which is incorporated by reference.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7, 8, 12, 17, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 7 recites the limitation "the at least one second region" in line 3. There is insufficient antecedent basis for this limitation in the claim. A second region was not previously recited in claims 1-4, so it is unclear to what the limitation refers.
Claim 12 recites the limitation "the at least one processor" in line 2. There is insufficient antecedent basis for this limitation in the claim. A processor was not previously recited in claim 11.
Claim 17 recites the limitations "the at least one first region" and “the at least one second region” in line 3. There is insufficient antecedent basis for these limitations in the claim. Neither a first region nor a second region was previously recited in claim 11.
Claim 18 recites the limitation "the processor" in line 2. There is insufficient antecedent basis for this limitation in the claim. A processor was not previously recited in claims 11 and 17.
Claims dependent on the above claims are also rejected because they do not resolve the deficiencies of their parent claims.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 11 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Penghui (CN212669124U provided translation).
Re Claim 11, Penghui discloses an object segmentation method for use by an autonomous mobile robot (AMR) (Page 8, “It should be noted that the foregoing method embodiment and the forklift embodiment belong to the same concept, and the specific implementation process is detailed in the forklift embodiment, and the technical features in the forklift embodiment are correspondingly applicable to the method embodiment, and will not be repeated here.”), the method comprising:
at least one sensor acquiring point cloud data (Page 1, “A plurality of the lidars are respectively arranged on both sides of the front end of the operation platform, and the lidar has a predetermined height from the ground, and is used to scan the environment in front of and on both sides of the forklift within a preset scanning range”; Page 2, “In this embodiment, the lidar adopts a measurement type lidar, which can obtain the measurement data of the original point cloud of the object, and solves the obstacle avoidance type laser used in the background technology to avoid obstacles in the low-position plane.”);
a pallet detection system configured to provide a pose of a payload; (Page 2, “The rear depth camera can simultaneously use the RBG information and the depth information to identify and estimate the position and pose of the target to be inserted by the forklift, so as to perform precise insertion actions on goods with uncertain positions.” The target is interpreted to be the payload.)
and an object segmentation system segmenting detected objects into obstructions and allowed objects based on the point cloud data, the pose of the payload, and the semantic data about the payload (Page 1, “A plurality of the lidars are respectively arranged on both sides of the front end of the operation platform, and the lidar has a predetermined height from the ground, and is used to scan the environment in front of and on both sides of the forklift within a preset scanning range; The depth sensor is arranged at the front end of the console, and the depth sensor is located between the lidars on both sides in the left and right direction, and is used to collect static objects around the forklift or moving objects in the physical environment For visual obstacle avoidance; The rear depth camera is arranged at the middle position of the rear end of the operating table, and is used for the forklift to insert and extract the target and accurately locate the position of the inserting target.”; Page 2, “The rear depth camera can simultaneously use the RBG information and the depth information to identify and estimate the position and pose of the target to be inserted by the forklift, so as to perform precise insertion actions on goods with uncertain positions.” Semantic data of the payload includes its dimensions.)
Re Claim 19, Penghui discloses wherein the at least one sensor comprises at least one of a LiDAR scanner and a 3D camera (Page 1, “A plurality of the lidars are respectively arranged on both sides of the front end of the operation platform, and the lidar has a predetermined height from the ground, and is used to scan the environment in front of and on both sides of the forklift within a preset scanning range; The depth sensor is arranged at the front end of the console, and the depth sensor is located between the lidars on both sides in the left and right direction, and is used to collect static objects around the forklift or moving objects in the physical environment For visual obstacle avoidance; The rear depth camera is arranged at the middle position of the rear end of the operating table, and is used for the forklift to insert and extract the target and accurately locate the position of the inserting target.”).
Re Claim 20, Penghui discloses wherein the AMR includes a pair of forks and the payload is a palletized payload (Page 4, “In this embodiment, after the forklift reaches the vicinity of the predetermined position, the depth sensor 30 has preliminarily confirmed the coordinate position of the insertion target (for example, a pallet).”; Fig. 2).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 9-10, 12-13, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Penghui (CN212669124U provided translation).
Re Claim 1, Penghui discloses an autonomous mobile robot (AMR) (Figs. 1-2), comprising:
at least one sensor configured to acquire point cloud data; (Page 1, “A plurality of the lidars are respectively arranged on both sides of the front end of the operation platform, and the lidar has a predetermined height from the ground, and is used to scan the environment in front of and on both sides of the forklift within a preset scanning range;”)
a pallet detection system configured to provide a pose of a payload; (Page 2, “The rear depth camera can simultaneously use the RBG information and the depth information to identify and estimate the position and pose of the target to be inserted by the forklift, so as to perform precise insertion actions on goods with uncertain positions.”)
and an object segmentation system comprising computer program code executable by the at least one processor to segment detected objects into obstructions and allowed objects based on the point cloud data, the pose of the payload, and semantic data about the payload (Page 1, “A plurality of the lidars are respectively arranged on both sides of the front end of the operation platform, and the lidar has a predetermined height from the ground, and is used to scan the environment in front of and on both sides of the forklift within a preset scanning range; The depth sensor is arranged at the front end of the console, and the depth sensor is located between the lidars on both sides in the left and right direction, and is used to collect static objects around the forklift or moving objects in the physical environment For visual obstacle avoidance; The rear depth camera is arranged at the middle position of the rear end of the operating table, and is used for the forklift to insert and extract the target and accurately locate the position of the inserting target.”; Page 2, “The rear depth camera can simultaneously use the RBG information and the depth information to identify and estimate the position and pose of the target to be inserted by the forklift, so as to perform precise insertion actions on goods with uncertain positions.” Semantic data of the payload includes its dimensions.)
Penghui does not explicitly disclose at least one processor in communication with at least one computer memory device. However, it is stated that the forklift’s systems perform Kalman filtering on the LiDAR data (Page 5, “Specifically: the visual inertial odometer data output by the front follower camera 50 is time-synchronized with the roulette encoder odometer data and lidar data, and then the visual inertial odometer data and roulette encoder odometer data synchronized with the elapsed time are time-synchronized. Kalman filtering is performed with the lidar data to obtain the optimal state estimation.”) to allow the forklift to avoid obstacles (Page 2, “In this embodiment, the lidar adopts a measurement type lidar, which can obtain the measurement data of the original point cloud of the object, and solves the obstacle avoidance type laser used in the background technology to avoid obstacles in the low-position plane.”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include a processor and memory device in the forklift because they would allow the forklift to process and store data and avoid obstacles.
Re Claim 2, Penghui discloses wherein the at least one processor provides an expected pose of the payload (Page 2, “The rear depth camera can simultaneously use the RBG information and the depth information to identify and estimate the position and pose of the target to be inserted by the forklift, so as to perform precise insertion actions on goods with uncertain positions.”).
Re Claim 3, Penghui does not explicitly disclose wherein the object segmentation system generates at least one first region around the payload based on the pose of the payload and the expected pose of the payload. However, it is stated that the pose of the payload is the first thing the forklift’s object segmentation system acquires once it is in the near vicinity of the target (Page 4, “In this embodiment, after the forklift reaches the vicinity of the predetermined position, the depth sensor 30 performs preliminary positioning of the insertion target (for example, a pallet), and confirms the coordinate position of the pallet. At this time, the forklift makes a U-turn, and the rear depth camera 40 is used to accurately position the pallet.” The preliminary position is equivalent to the expected pose and the confirmation of the coordinate position is equivalent to the actual pose). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention that the at least one first region of the forklift’s path planning would be based around the payload.
Re Claim 9, Penghui discloses wherein the at least one sensor comprises at least one of a LiDAR scanner and a 3D camera (Page 1, “A plurality of the lidars are respectively arranged on both sides of the front end of the operation platform, and the lidar has a predetermined height from the ground, and is used to scan the environment in front of and on both sides of the forklift within a preset scanning range… The rear depth camera is arranged at the middle position of the rear end of the operating table, and is used for the forklift to insert and extract the target and accurately locate the position of the inserting target.”).
Re Claim 10, Penghui discloses wherein the AMR includes a pair of forks and the payload is a palletized payload (Page 4, “In this embodiment, after the forklift reaches the vicinity of the predetermined position, the depth sensor 30 has preliminarily confirmed the coordinate position of the insertion target (for example, a pallet).”; Fig. 2).
Re Claim 12, Penghui does not explicitly disclose at least one processor providing an expected pose of the payload. However, it is stated that the forklift’s systems perform Kalman filtering on the LiDAR data (Page 5, “Specifically: the visual inertial odometer data output by the front follower camera 50 is time-synchronized with the roulette encoder odometer data and lidar data, and then the visual inertial odometer data and roulette encoder odometer data synchronized with the elapsed time are time-synchronized. Kalman filtering is performed with the lidar data to obtain the optimal state estimation.”) and acquire the expected pose of the target through the rear depth camera (Page 2, “The rear depth camera can simultaneously use the RBG information and the depth information to identify and estimate the position and pose of the target to be inserted by the forklift, so as to perform precise insertion actions on goods with uncertain positions.”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include a processor in the forklift because it would allow the forklift to process the data collected by the LiDARs and the rear depth camera to provide the target’s expected pose.
Re Claim 13, Penghui does not explicitly disclose including the object segmentation system generating at least one first region around the payload based on the pose of the payload and the expected pose of the payload. However, it is stated that the pose of the payload is the first thing the forklift’s object segmentation system acquires once it is in the near vicinity of the target (Page 4, “In this embodiment, after the forklift reaches the vicinity of the predetermined position, the depth sensor 30 has preliminarily confirmed the coordinate position of the insertion target (for example, a pallet). At this time, the forklift turns around, and the rear depth camera 40 recognizes the pallet and accurately positions the pallet. Specifically, the rear depth camera 40 is used to accurately align the slot of the pallet, so that the vehicle body The fork 12 can be precisely aligned and inserted into the slot of the pallet.” The preliminary position is equivalent to the expected pose and the confirmation of the coordinate position is equivalent to the actual pose). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention that the at least one first region of the forklift’s path planning would be based around the payload.
Re Claim 17, Penghui does not explicitly disclose including the object segmentation system filtering out points from the point cloud data based on the at least one first region and the at least one second region. However, it is stated that the forklift’s systems perform Kalman filtering on the LiDAR data (Page 5, “Specifically: the visual inertial odometer data output by the front follower camera 50 is time-synchronized with the roulette encoder odometer data and lidar data, and then the visual inertial odometer data and roulette encoder odometer data synchronized with the elapsed time are time-synchronized. Kalman filtering is performed with the lidar data to obtain the optimal state estimation.”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention that, so long as the system is fed point cloud data from the LiDARs, the Kalman filter would be able to filter out the points of these scanned regions.
Re Claim 18, Penghui does not explicitly disclose including the processor excluding the filtered out points for obstruction detection. However, it is stated that the system cross-checks the data from both LiDAR sensors and calculates the error between the two (Page 7, “In an embodiment, in the step S2, the scan match processing of the left and right lidar data respectively obtains the state estimation between the respective frames and the frame, and at the same time obtains the error estimation and the variance of the respective state estimation”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention that the Kalman filter would filter out this data error in order to obtain the optimal state estimation for the forklift’s obstruction detection.
Claims 4, 7-8, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Penghui (CN212669124U provided translation) in view of Wei (CN111126269A provided translation).
Re Claim 4, Penghui teaches that the loaded pallet the robot collects is square from a top view (Fig. 4 and Page 3, “4 is a schematic diagram of the scanning range of a two-dimensional single-line lidar when a forklift is fully loaded with a pallet according to an embodiment of the present invention”) and that the rear depth camera determines the position and pose of the loaded pallet using depth (Page 2, “The rear depth camera can simultaneously use the RBG information and the depth information to identify and estimate the position and pose of the target to be inserted by the forklift, so as to perform precise insertion actions on goods with uncertain positions”), but does not explicitly disclose wherein the at least one first region is at least one three-dimensional box. However, Wei teaches a method of creating a three-dimensional box region about a targeted object (Page 1, “…a three-dimensional target detection method is provided, including: setting a first coordinate center of a target object in a monocular image as a second coordinate center of a 3D bounding box of the target object; acquiring an acquisition location”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Penghui’s robot with Wei’s teaching of target box bounding because it would provide the robot with a more efficient manner of path planning for a three-dimensional pallet or any target object.
Re Claim 7, Penghui does not explicitly disclose wherein the object segmentation system is configured to filter out points from the point cloud data based on the at least one first region and the at least one second region. However, it is stated that the forklift’s systems perform Kalman filtering on the LiDAR data (Page 5, “Specifically: the visual inertial odometer data output by the front follower camera 50 is time-synchronized with the roulette encoder odometer data and lidar data, and then the visual inertial odometer data and roulette encoder odometer data synchronized with the elapsed time are time-synchronized. Kalman filtering is performed with the lidar data to obtain the optimal state estimation.”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention that, so long as the system is fed point cloud data from the LiDARs, the Kalman filter would be able to filter out the points of these scanned regions.
Re Claim 8, Penghui does not explicitly disclose wherein the processor is configured to not use the filtered out points for obstruction detection. However, it is stated that the system cross-checks the data from both LiDAR sensors and calculates the error between the two (Page 7, “In an embodiment, in the step S2, the scan match processing of the left and right lidar data respectively obtains the state estimation between the respective frames and the frame, and at the same time obtains the error estimation and the variance of the respective state estimation”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention that the Kalman filter would filter out this data error in order to obtain the optimal state estimation for the forklift’s obstruction detection.
Re Claim 14, Penghui teaches that the loaded pallet the robot collects is square from a top view (Fig. 4 and Page 3, “4 is a schematic diagram of the scanning range of a two-dimensional single-line lidar when a forklift is fully loaded with a pallet according to an embodiment of the present invention”) and that the rear depth camera determines the position and pose of the loaded pallet using depth (Page 2, “The rear depth camera can simultaneously use the RBG information and the depth information to identify and estimate the position and pose of the target to be inserted by the forklift, so as to perform precise insertion actions on goods with uncertain positions”), but does not explicitly disclose wherein the at least one first region is at least one three-dimensional box. However, Wei teaches a method of creating a three-dimensional box region about a targeted object (Page 1, “…a three-dimensional target detection method is provided, including: setting a first coordinate center of a target object in a monocular image as a second coordinate center of a 3D bounding box of the target object; acquiring an acquisition location”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Penghui’s robot with Wei’s teaching of target box bounding because it would provide the robot with a more efficient manner of path planning for a three-dimensional pallet or any target object.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Penghui (CN212669124U provided translation) in view of Dammeyer (US 5738187).
Re Claim 5, Penghui discloses the expected pose of the target/payload and forklift/robot (Page 6, “The radar data is processed by Scan Match to obtain the state estimation between the respective frame and the frame, and the error estimate and variance of the respective state estimation are obtained at the same time; the lidar data is time synchronized with the roulette encoder odometer data, and the elapsed time is synchronized Kalman filtering is performed on the data to obtain the optimal state estimation, which makes the estimation of the forklift's pose and state more accurate.”) but does not explicitly disclose wherein the object segmentation system generates at least one second region between forks of the robot and outriggers of the robot based on the expected pose of the payload and an expected pose of the robot. However, Dammeyer teaches a forklift with outriggers and a camera that can position itself to see the area below the forks when they are elevated (Col. 4, Lines 55-57, “Forward of the body 20 are outriggers 35 carrying front support wheels 37.”; Col. 1, Lines 56-64, “This invention also includes a camera, which is equipped with a horizontal plane reticle and mounted on a vertically movable carriage assembly and which is protected from damage and contact with the floor when the forks are in their lowermost position. The camera is lowered to a first predetermined position below the forks and load when the forks are raised, which provides the camera with a view that is optimum for viewing a target for vertical height position of the forks or load.”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Penghui’s robot with Dammeyer’s teachings of outriggers and cameras to improve its stability and to allow observation of the space beneath the forks when they are elevated and/or loaded with a pallet, thereby establishing a second region for the forklift’s object segmentation system and preventing Penghui’s depth camera from being obstructed by a loaded pallet or from being damaged when near the floor.
Re Claim 15, Penghui discloses the expected pose of the target/payload and forklift/robot (Page 6, “The radar data is processed by Scan Match to obtain the state estimation between the respective frame and the frame, and the error estimate and variance of the respective state estimation are obtained at the same time; the lidar data is time synchronized with the roulette encoder odometer data, and the elapsed time is synchronized Kalman filtering is performed on the data to obtain the optimal state estimation, which makes the estimation of the forklift's pose and state more accurate.”) but does not explicitly disclose including the object segmentation system generating at least one second region between forks of the robot and outriggers of the robot based on the expected pose of the payload and an expected pose of the robot. However, Dammeyer teaches a forklift with outriggers and a camera that can position itself to see the area below the forks when they are elevated (Col. 4, Lines 55-57, “Forward of the body 20 are outriggers 35 carrying front support wheels 37.”; Col. 1, Lines 56-64, “This invention also includes a camera, which is equipped with a horizontal plane reticle and mounted on a vertically movable carriage assembly and which is protected from damage and contact with the floor when the forks are in their lowermost position. The camera is lowered to a first predetermined position below the forks and load when the forks are raised, which provides the camera with a view that is optimum for viewing a target for vertical height position of the forks or load.”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Penghui’s method with Dammeyer’s teachings of outriggers and cameras to improve its stability and to allow observation of the space beneath the forks when they are elevated and/or loaded with a pallet, thereby establishing a second region for the forklift’s object segmentation system and preventing Penghui’s depth camera from being obstructed by a loaded pallet or from being damaged when near the floor.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Penghui (CN212669124U provided translation) in view of Wei (CN111126269A provided translation) and Dammeyer (US 5738187).
Re Claim 6, Penghui does not explicitly disclose wherein the at least one second region is at least one three-dimensional box. However, Wei teaches a method of creating a three-dimensional box region about a targeted object (Page 1, “…a three-dimensional target detection method is provided, including: setting a first coordinate center of a target object in a monocular image as a second coordinate center of a 3D bounding box of the target object; acquiring an acquisition location”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Penghui’s robot with Wei’s teaching of target box bounding because it would provide the robot with a more efficient manner of path planning for a three-dimensional pallet or any target object.
Re Claim 16, Penghui does not explicitly disclose wherein the at least one second region is at least one three-dimensional box. However, Wei teaches a method of creating a three-dimensional box region about a targeted object (Page 1, “…a three-dimensional target detection method is provided, including: setting a first coordinate center of a target object in a monocular image as a second coordinate center of a 3D bounding box of the target object; acquiring an acquisition location”). Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Penghui’s robot with Wei’s teaching of target box bounding because it would provide the robot with a more efficient manner of path planning for a three-dimensional pallet or target object.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW KILLIAN PEPPER whose telephone number is (571)272-6815. The examiner can normally be reached Monday - Friday 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin, can be reached at (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.K.P./Examiner, Art Unit 3657 /ABBY LIN/Supervisory Patent Examiner, Art Unit 3657