DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 21-30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 21 recites "a method of calibrating a robot," but the body of the claim does not recite any calibration of the robot; rather, the claim calls for detecting a reference object and modifying a sensor posture parameter value of the robot. Because the recited steps do not set forth the claimed calibration, the claim fails to particularly point out and distinctly claim the subject matter regarded as the invention.
The remaining claims are rejected based on their dependency from a rejected base claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21, 22, 24-27, 30-32, 34-37, and 40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim (US Pub 2011/0182476).
Regarding claim 21, Kim discloses a method of calibrating a robot (performing calibration by robot 100; fig. 1; abstract; sec 0009-0012, 0031, 0037) comprising a plurality of sensors (composite sensors in unit 101; fig. 1; sec 0027, 0028), the method comprising:
detecting a reference object via the plurality of sensors (composite sensors in unit 101; fig. 1; sec 0009, 0027, 0028, 0031) of the robot during driving of the robot along a path according to a local path plan (path plan to travel to a particular location; sec 0033); and
based on a position of the reference object sensed by a specific sensor of the plurality of sensors, modifying a sensor posture parameter value (modify sensor posture parameter value by transforming coordinate point values of a sensor posture parameter; sec 0031, 0037-0039; fig. 3) for each of one or more remaining sensors among the plurality of sensors (composite sensors in unit 101; fig. 1; sec 0027, 0028).
Regarding claim 22, Kim discloses the method of claim 21, wherein modifying the sensor posture parameter value comprises modifying the sensor posture parameter value so that a position of the reference object sensed by each of the one or more remaining sensors becomes same as the position of the reference object sensed by the specific sensor (transforming sensor posture parameter values of the remaining sensors to match sensor posture parameter values of the specific sensor; sec 0037-0039; fig. 3).
Regarding claim 24, Kim discloses the method of claim 21, further comprising estimating postures of the plurality of sensors based on the position of the reference object sensed by the plurality of sensors (figs. 1, 2; sec 0009, 0027, 0028, 0031).
Regarding claim 25, Kim discloses the method of claim 21, further comprising determining an identity of the reference object detected via the plurality of sensors, wherein modifying the sensor posture parameter value is performed based on admitting the determined identity of the reference object (figs. 5A-F; sec 0055-0061).
Regarding claim 26, Kim discloses the method of claim 21, wherein the reference object is a corner at which two planes meet or a cylinder (figs. 5A-F; sec 0055-0061).
Regarding claim 27, Kim discloses the method of claim 26, wherein the reference object is located at an area spaced apart from the robot by a prescribed distance so as to be detectable by all of the plurality of sensors (sec 0009, 0027, 0028, 0031).
Regarding claim 30, Kim discloses the method of claim 21, further comprising modifying the sensor posture parameter value for each of the plurality of sensors based on an average position of the reference object sensed by the plurality of sensors (composite sensors in unit 101; fig. 1; sec 0009, 0027, 0028, 0031).
Regarding claim 31, Kim discloses a robot comprising:
a driving assembly (103; fig. 1) configured to move the robot (sec 0006, 0026);
a plurality of sensors (composite sensors in unit 101; fig. 1; sec 0009, 0027, 0028, 0031); and
a processor configured to:
detect a reference object via the plurality of sensors during driving of the robot along a path according to a local path plan (composite sensors in unit 101; fig. 1; sec 0009, 0027, 0028, 0031); and
based on a position of the reference object sensed by a specific sensor of the plurality of sensors, modify a sensor posture parameter value for each of one or more remaining sensors of the plurality of sensors.
Regarding claim 32, Kim discloses the robot of claim 31, wherein the processor is further configured to modify the sensor posture parameter value (modify sensor posture parameter value by transforming coordinate point values of a sensor posture parameter; sec 0031, 0037-0039; fig. 3) so that a position of the reference object sensed by each of the one or more remaining sensors becomes same as the position of the reference object sensed by the specific sensor (abstract; sec 0009-0012, 0031, 0037).
Regarding claim 34, Kim discloses the robot of claim 31, wherein the processor is further configured to estimate postures of the plurality of sensors based on the position of the reference object sensed by the plurality of sensors (figs. 1, 2; sec 0009, 0027, 0028, 0031).
Regarding claim 35, Kim discloses the robot of claim 31, wherein the processor is further configured to determine an identity of the reference object detected via the plurality of sensors and modify the sensor posture parameter value based on admitting the determined identity of the reference object (figs. 5A-F; sec 0055-0061).
Regarding claim 36, Kim discloses the robot of claim 31, wherein the reference object is a corner at which two planes meet or a cylinder (figs. 5A-F; sec 0055-0061).
Regarding claim 37, Kim discloses the robot of claim 36, wherein the reference object is located at an area spaced apart from the robot by a prescribed distance so as to be detectable by all of the plurality of sensors (figs. 5A-F; sec 0055-0061).
Regarding claim 40, Kim discloses the robot of claim 31, wherein the processor is further configured to modify the sensor posture parameter value for each of the plurality of sensors based on an average position of the reference object sensed by the plurality of sensors (composite sensors in unit 101; fig. 1; sec 0009, 0027, 0028, 0031).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 23, 28, 29, 33, 38, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US Pub 2011/0182476) in view of Zhang (US Pub 2020/0192388).
Regarding claim 23, Kim discloses the method of claim 22, further comprising driving of the robot based on a difference between the sensor posture parameter value before the modification and the sensor posture parameter value after the modification being outside of a preset threshold range. Kim does not, however, particularly recite stopping the driving of the robot. Zhang teaches a robot that calibrates its sensors (sec 0406) and stops before and after sensing a collision (sec 0338, 0396; figs. 25, 26).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Kim to stop the robot when a collision is imminent, as taught by Zhang. As modified by Zhang, Kim would be able to stop the driving of the robot when a collision is imminent, and the collision can be determined to be imminent when there is a sensed difference between the sensor posture parameter value before the modification and the sensor posture parameter value after the modification that is outside of a preset threshold range.
Regarding claim 28, Kim discloses the method of claim 21, wherein the plurality of sensors comprises:
a first three-dimensional (3D) camera for sensing an object located in front (sec 0027-0030).
Kim indicates that implementing a composite sensor does not limit the types of sensors implemented. However, Zhang teaches a method wherein a plurality of sensors comprises:
a first red-green-blue (RGB) camera and a first three-dimensional (3D) camera for sensing an object located in front (sec 0057, 0314);
a second RGB camera and a second 3D camera for sensing an object located below (sec 0057, 0314); and
a laser scanner for sensing an object positioned in front (sec 0057, 0314).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Kim to include several types of sensors, including RGB cameras and a laser scanner, as taught by Zhang, for the purpose of improving the mitigation of an imminent collision as taught by Zhang.
Regarding claim 29, Zhang discloses the method of claim 28, wherein the first RGB camera and the first 3D camera are located close to an upper end portion of the robot (sec 0057, 0314), wherein the laser scanner is located close to a lower end portion of the robot, wherein the second RGB camera and the second 3D camera are located between the laser scanner and the first RGB camera and the first 3D camera, and wherein the specific sensor is the laser scanner (sec 0057, 0314).
Regarding claim 33, Kim discloses the robot of claim 32, wherein the processor is further configured to drive the robot based on a difference between the sensor posture parameter value before the modification and the sensor posture parameter value after the modification being outside of a preset threshold range.
Kim does not, however, particularly recite stopping the driving of the robot. Zhang teaches a robot that calibrates its sensors (sec 0406) and a processor that stops the robot before and after sensing a collision (sec 0338, 0396; figs. 25, 26).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Kim to stop the robot when a collision is imminent, as taught by Zhang. As modified by Zhang, Kim would be able to stop the driving of the robot when a collision is imminent, and the collision can be determined to be imminent when there is a sensed difference between the sensor posture parameter value before the modification and the sensor posture parameter value after the modification that is outside of a preset threshold range.
Regarding claim 38, Kim discloses the robot of claim 31, wherein the plurality of sensors comprises:
a first three-dimensional (3D) camera for sensing an object located in front (sec 0027-0030).
Kim indicates that implementing a composite sensor does not limit the types of sensors implemented. However, Zhang teaches a method wherein a plurality of sensors comprises:
a first red-green-blue (RGB) camera and a first three-dimensional (3D) camera for sensing an object located in front (sec 0057, 0314);
a second RGB camera and a second 3D camera for sensing an object located below (sec 0057, 0314); and
a laser scanner for sensing an object positioned in front (sec 0057, 0314).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Kim to include several types of sensors, including RGB cameras and a laser scanner, as taught by Zhang, for the purpose of improving the mitigation of an imminent collision as taught by Zhang.
Regarding claim 39, Zhang discloses the robot of claim 38, wherein the first RGB camera and the first 3D camera are located close to an upper end portion of the robot, wherein the laser scanner is located close to a lower end portion of the robot, wherein the second RGB camera and the second 3D camera are located between the laser scanner and the first RGB camera and the first 3D camera, and wherein the specific sensor is the laser scanner (sec 0057, 0314).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: LIU, YI (CN 113494916 B) and BAI, Jun-jie (CN 118518817 A).
Communication
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RONNIE MANCHO whose telephone number is (571)272-6984. The examiner can normally be reached Mon-Thurs.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Mott, can be reached at 571-270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RONNIE M MANCHO/Primary Examiner, Art Unit 3657