DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 6, 8, 11-12, and 14-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Park (US 2021/0216079 A1).
Regarding claim 1, Park discloses a method for controlling motion of a robot, comprising: obtaining (via Figs. 1-2, element 150) a moving scene image (Figs. 3 and 13) collected by a visual sensor (Fig. 2, element 220, “camera sensor”; paragraphs 0045-0046, 0066-0067, 0071, 0087, 0118-0120), and detecting (Fig. 7, step S3) a target reference object (Figs. 6, 8-10, 12 and 15-16, elements W1, W2, “installed objects”) according to the moving scene image to obtain a detection result (lines) of the target reference object, wherein the target reference object comprises reference objects at two sides (Figs. 13 and 15-16, elements L1-L5, left_bottom_line, R1-R3, right_bottom_line) of a moving track (Fig. 15, elements 100a-100e) of a target robot (Figs. 1, 8-11, 14-15, element 100; paragraphs 0041-0043, 0055, 0071, 0078-0080, 0088, 0110); determining (Fig. 7, step S4) an edge line (Fig. 6, elements left_line, right_line) of the moving track according to the detection result of the target reference object (paragraphs 0065, 0080); determining positioning information of a target shadow eliminating point (Figs. 9-10, element VP) according to the edge line (paragraphs 0102, 0106); and determining (Fig. 7, steps S6, S14) a motion adjustment parameter of the target robot according to the positioning information (paragraphs 0081-0082), and adjusting (Fig. 7, steps S7-S8 and S15-S16) a motion state of the target robot according to the motion adjustment parameter (paragraphs 0082, 0091).
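The mapping above turns on intersecting the two detected edge lines at a vanishing point (Park's element VP) and deriving a motion adjustment from its position. As an illustrative sketch only — the function names, coordinates, and the center-offset heuristic below are ours, not Park's disclosure — the underlying geometry can be expressed as:

```python
# Illustrative sketch (hypothetical, not Park's implementation): estimate a
# vanishing point from the two track edge lines and derive a simple
# steering correction from its horizontal offset in the image.

def line_through(p1, p2):
    """Return homogeneous line coefficients (a, b, c) for ax + by + c = 0."""
    (x1, y1), (x2, y2) = p1, p2
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    """Intersect two homogeneous lines; returns (x, y), or None if parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-9:
        return None
    return ((b1 * c2 - b2 * c1) / d, (a2 * c1 - a1 * c2) / d)

def steering_offset(left_pts, right_pts, image_width):
    """Positive result: vanishing point lies right of image center."""
    vp = intersect(line_through(*left_pts), line_through(*right_pts))
    if vp is None:
        return 0.0
    return vp[0] - image_width / 2.0
```

With symmetric edge lines converging at the image center, the offset is zero and no correction is needed; an off-center vanishing point signals drift toward one side of the track.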
Regarding claim 2, Park discloses the method of claim 1, wherein detecting the target reference object according to the moving scene image to obtain the detection result of the target reference object comprises: obtaining an edge line segment (Fig. 11, elements 221a, 221b) within a preset angle range by scanning the moving scene image, wherein the edge line segment represents an edge (Fig. 11, element 221) of the target reference object (paragraphs 0065, 0109-0111); and obtaining the detection result of the target reference object according to the edge line segment (paragraph 0111). It is inherent that the camera sensor has a preset angle range (i.e., a field of view).
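The "preset angle range" reading can be pictured as a filter over detected line segments that keeps only those within a given orientation band. This is a generic sketch with invented names and thresholds, offered only to illustrate the concept, not taken from Park:

```python
import math

# Hypothetical helper: keep only line segments whose orientation falls
# within a preset angle range (degrees from horizontal).

def segments_in_angle_range(segments, min_deg, max_deg):
    """segments: iterable of ((x1, y1), (x2, y2)) endpoint pairs.
    Returns the segments whose absolute orientation lies in [min_deg, max_deg]."""
    kept = []
    for (x1, y1), (x2, y2) in segments:
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        if min_deg <= angle <= max_deg:
            kept.append(((x1, y1), (x2, y2)))
    return kept
```

For example, with a 30-60 degree band, a horizontal segment and a vertical segment are discarded while a 45-degree segment is retained.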
Regarding claim 3, Park discloses the method of claim 2, wherein the target reference object is a shelf; and obtaining the edge line segment within the preset angle range by scanning the moving scene image comprises: obtaining a beam boundary line segment (Fig. 13, elements left_bottom_line, right_bottom_line) of the shelf by scanning the shelf within the preset angle range in the moving scene image; and obtaining the detection result of the target reference object according to the beam boundary line segment (paragraphs 0061, 0118-0120).
Regarding claim 6, Park discloses the method of claim 1, wherein determining the motion adjustment parameter of the target robot according to the positioning information comprises: determining (Fig. 7, steps S5, S13) a track direction (left/right) of the moving track according to the positioning information (paragraphs 0080-0083); and determining (Fig. 7, steps S6, S14) the motion adjustment parameter of the target robot according to the track direction (paragraphs 0081-0082).
Regarding claim 8, Park discloses the method of claim 6, wherein determining the motion adjustment parameter of the target robot according to the track direction comprises: determining (Fig. 7, steps S5, S13) a moving direction (left/right) of the target robot according to the moving scene image (paragraphs 0080-0083); and calculating (Fig. 7, steps S6-S8, S14-S16) the motion adjustment parameter of the target robot according to the moving direction and the track direction (paragraphs 0081-0082, 0091).
Regarding claim 11, Park discloses a computing device, comprising: a memory, and a processor (Fig. 1, element 250); wherein the memory is configured to store computer-executable instructions (paragraphs 0050, 0144), and the processor is configured to execute the computer-executable instructions to: obtain a moving scene image (Figs. 3 and 13) collected by a visual sensor (Fig. 2, element 220, “camera sensor”; paragraphs 0045-0046, 0066-0067, 0071, 0087, 0118-0120), and detect (Fig. 7, step S3) a target reference object (Figs. 6, 8-10, 12 and 15-16, elements W1, W2, “installed objects”) according to the moving scene image to obtain a detection result (lines) of the target reference object, wherein the target reference object comprises reference objects at two sides (Figs. 13 and 15-16, elements L1-L5, left_bottom_line, R1-R3, right_bottom_line) of a moving track (Fig. 15, elements 100a-100e) of a target robot (Figs. 1, 8-11, 14-15, element 100; paragraphs 0041-0043, 0055, 0071, 0078-0080, 0088, 0110); determine (Fig. 7, step S4) an edge line (Fig. 6, elements left_line, right_line) of the moving track according to the detection result of the target reference object (paragraphs 0065, 0080); determine positioning information of a target shadow eliminating point (Figs. 9-10, element VP) according to the edge line (paragraphs 0102, 0106); and determine (Fig. 7, steps S6, S14) a motion adjustment parameter of the target robot according to the positioning information (paragraphs 0081-0082), and adjust (Fig. 7, steps S7-S8 and S15-S16) a motion state of the target robot according to the motion adjustment parameter (paragraphs 0082, 0091).
Regarding claim 12, Park discloses a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor (Fig. 1, element 250; paragraphs 0050, 0144), cause the processor to: obtain a moving scene image (Figs. 3 and 13) collected by a visual sensor (Fig. 2, element 220, “camera sensor”; paragraphs 0045-0046, 0066-0067, 0071, 0087, 0118-0120), and detect (Fig. 7, step S3) a target reference object (Figs. 6, 8-10, 12 and 15-16, elements W1, W2, “installed objects”) according to the moving scene image to obtain a detection result (lines) of the target reference object, wherein the target reference object comprises reference objects at two sides (Figs. 13 and 15-16, elements L1-L5, left_bottom_line, R1-R3, right_bottom_line) of a moving track (Fig. 15, elements 100a-100e) of a target robot (Figs. 1, 8-11, 14-15, element 100; paragraphs 0041-0043, 0055, 0071, 0078-0080, 0088, 0110); determine (Fig. 7, step S4) an edge line (Fig. 6, elements left_line, right_line) of the moving track according to the detection result of the target reference object (paragraphs 0065, 0080); determine positioning information of a target shadow eliminating point (Figs. 9-10, element VP) according to the edge line (paragraphs 0102, 0106); and determine (Fig. 7, steps S6, S14) a motion adjustment parameter of the target robot according to the positioning information (paragraphs 0081-0082), and adjust (Fig. 7, steps S7-S8 and S15-S16) a motion state of the target robot according to the motion adjustment parameter (paragraphs 0082, 0091).
Regarding claim 14, Park discloses the computing device of claim 11, wherein the processor is further configured to: obtain an edge line segment (Fig. 11, elements 221a, 221b) within a preset angle range by scanning the moving scene image, wherein the edge line segment represents an edge (Fig. 11, element 221) of the target reference object (paragraphs 0065, 0109-0111); and obtain the detection result of the target reference object according to the edge line segment (paragraph 0111). It is inherent that the camera sensor has a preset angle range (i.e., a field of view).
Regarding claim 15, Park discloses the computing device of claim 14, wherein the target reference object is a shelf; and the processor is further configured to: obtain a beam boundary line segment (Fig. 13, elements left_bottom_line, right_bottom_line) of the shelf by scanning the shelf within the preset angle range in the moving scene image; and obtain the detection result of the target reference object according to the beam boundary line segment (paragraphs 0061, 0118-0120).
Regarding claim 18, Park discloses the computing device of claim 11, wherein the processor is further configured to: determine (Fig. 7, steps S5, S13) a track direction of the moving track according to the positioning information (paragraphs 0080-0083); and determine (Fig. 7, steps S6, S14) the motion adjustment parameter of the target robot according to the track direction (paragraphs 0081-0082).
Regarding claim 20, Park discloses the computing device of claim 18, wherein the processor is further configured to: determine (Fig. 7, steps S5, S13) a moving direction of the target robot according to the moving scene image (paragraphs 0080-0083); and calculate (Fig. 7, steps S6-S8, S14-S16) the motion adjustment parameter of the target robot according to the moving direction and the track direction (paragraphs 0081-0082, 0091).
Regarding claim 22, Park discloses the non-transitory computer-readable storage medium of claim 12, wherein the computer-executable instructions further cause the processor to: obtain an edge line segment (Fig. 11, elements 221a, 221b) within a preset angle range by scanning the moving scene image, wherein the edge line segment represents an edge (Fig. 11, element 221) of the target reference object (paragraphs 0065, 0109-0111); and obtain the detection result of the target reference object according to the edge line segment (paragraph 0111). It is inherent that the camera sensor has a preset angle range (i.e., a field of view).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Park as applied to claims 6 and 18 above, and further in view of Mohammad Mirzaei et al. (US 2014/0198227 A1, hereinafter referred to as “Mirzaei”).
Regarding claim 7, Park teaches the method of claim 6. Park is silent regarding determining the track direction of the moving track according to the positioning information comprising: obtaining a target internal reference matrix of the visual sensor; and calculating the track direction of the moving track according to the positioning information and the target internal reference matrix.
Mirzaei teaches a technique for calculating a vanishing point (Fig. 5, element 520 via Fig. 6, step 620) from lines (Fig. 5, elements 514, 516) derived from a captured image (Fig. 6, step 610) from a visual sensor (Fig. 1, element 110). The technique includes obtaining a target internal reference matrix (“intrinsic calibration matrix”) of the visual sensor and calculating a track direction (orientation) of a moving track according to positioning information and the target internal reference matrix (paragraphs 0053-0054, 0071, 0075, 0081-0088). It would have been obvious to a person having ordinary skill in the art prior to Applicant’s effective filing date to apply the well-known technique taught by Mirzaei to the prior art method taught by Park. That is, it would have been obvious to configure the step of determining the track direction of the moving track according to the positioning information to further comprise obtaining a target internal reference matrix of the visual sensor by applying the well-known technique taught by Mirzaei. Application of the well-known technique to the prior art method taught by Park would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and because such application would have yielded predictable results. The predictable results include: the step of determining the track direction of the moving track according to the positioning information further comprising: obtaining a target internal reference matrix of the visual sensor; and calculating the track direction of the moving track according to the positioning information and the target internal reference matrix.
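The combination rests on a standard projective-geometry fact: back-projecting an image vanishing point through the inverse of the intrinsic ("internal reference") matrix recovers the 3-D direction of the imaged parallel lines. A minimal sketch of that computation follows; the intrinsic values are made-up examples, not taken from Mirzaei or Park:

```python
import numpy as np

# Sketch of the general vanishing-point back-projection technique:
# the direction of the parallel track lines is d ~ K^-1 * [u, v, 1]^T,
# where (u, v) is the vanishing point and K the camera intrinsic matrix.

def track_direction(vp_xy, K):
    """vp_xy: vanishing point in pixel coordinates; K: 3x3 intrinsic matrix.
    Returns a unit 3-D direction vector in the camera frame."""
    u, v = vp_xy
    d = np.linalg.solve(K, np.array([u, v, 1.0]))  # K^-1 applied to [u, v, 1]
    return d / np.linalg.norm(d)

# Example intrinsics (focal length 800 px, principal point at image center
# of a 640x480 frame) -- illustrative values only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```

A vanishing point at the principal point back-projects to the optical axis, i.e. the track runs straight ahead of the camera; off-center vanishing points yield correspondingly tilted directions.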
Regarding claim 19, Park teaches the computing device of claim 18. Park is silent regarding the processor being further configured to: obtain a target internal reference matrix of the visual sensor; and calculate the track direction of the moving track according to the positioning information and the target internal reference matrix.
Mirzaei teaches a technique for calculating a vanishing point (Fig. 5, element 520 via Fig. 6, step 620) from lines (Fig. 5, elements 514, 516) derived from a captured image (Fig. 6, step 610) from a visual sensor (Fig. 1, element 110). The technique includes obtaining a target internal reference matrix (“intrinsic calibration matrix”) of the visual sensor and calculating a track direction (orientation) of a moving track according to positioning information and the target internal reference matrix (paragraphs 0053-0054, 0071, 0075, 0081-0088). It would have been obvious to a person having ordinary skill in the art prior to Applicant’s effective filing date to apply the well-known technique taught by Mirzaei to the prior art device taught by Park. That is, it would have been obvious to configure the device taught by Park to obtain a target internal reference matrix of the visual sensor by applying the well-known technique taught by Mirzaei. Application of the well-known technique to the prior art device would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and because such application would have yielded predictable results. The predictable results include: the processor being further configured to: obtain a target internal reference matrix of the visual sensor; and calculate the track direction of the moving track according to the positioning information and the target internal reference matrix.
Claims 9 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Park as applied to claims 1 and 11 above, and further in view of Yi et al. (US 2020/0282566 A1, hereinafter referred to as “Yi”).
Regarding claim 9, Park teaches the method of claim 1. Park is silent regarding detecting the target reference object according to the moving scene image to obtain the detection result of the target reference object further comprising: obtaining a noise-eliminated moving scene image by performing a filtering processing on the moving scene image.
Yi teaches a technique for obtaining (Fig. 3, step S14) a noise-eliminated moving scene image by performing a filtering process on the moving scene image (paragraphs 0059 and 0075). It would have been obvious to a person having ordinary skill in the art prior to Applicant’s effective filing date to apply the well-known technique taught by Yi to the prior art method taught by Park. That is, it would have been obvious to filter the images captured by Park for noise by applying the well-known technique taught by Yi. Application of the well-known technique taught by Yi to the prior art method taught by Park would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and because such application would have yielded predictable results. The predictable results include: obtaining a noise-eliminated moving scene image by performing a filtering processing on the moving scene image.
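Since the citations above do not pin down which filter Yi applies, the following is only a generic sketch of what "obtaining a noise-eliminated image by filtering" can look like; the 3x3 mean filter is our illustrative choice, not attributed to either reference:

```python
import numpy as np

# Generic noise-filtering illustration: a 3x3 mean (box) filter applied
# to a grayscale image, with edge-replicated padding so the output keeps
# the input's shape.

def mean_filter_3x3(img):
    """img: 2-D numpy array. Returns a same-size smoothed copy."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    # Sum the nine shifted copies of the image, then divide by nine.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0
```

A constant image passes through unchanged, while an isolated bright pixel is spread over its neighborhood, which is the noise-suppression behavior the rejection relies on.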
Regarding claim 21, Park teaches the computing device of claim 11. Park is silent regarding the processor being further configured to: obtain a noise-eliminated moving scene image by performing a filtering processing on the moving scene image.
Yi teaches a technique for obtaining (Fig. 3, step S14) a noise-eliminated moving scene image by performing a filtering process on the moving scene image (paragraphs 0059 and 0075). It would have been obvious to a person having ordinary skill in the art prior to Applicant’s effective filing date to apply the well-known technique taught by Yi to the prior art device taught by Park. That is, it would have been obvious to configure the device taught by Park to noise-filter the images captured by the vision sensor by applying the well-known noise-filtering technique taught by Yi. Application of the well-known technique to the prior art device would have been obvious because such application would have been well within the level of skill of a person having ordinary skill in the art and because such application would have yielded predictable results. The predictable results include: the processor being configured to obtain a noise-eliminated moving scene image by performing a filtering processing on the moving scene image.
Allowable Subject Matter
Claims 4-5 and 16-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DALE MOYER whose telephone number is (571)270-7821. The examiner can normally be reached Monday-Friday 8am-5pm PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi H Tran can be reached at 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Dale Moyer/Primary Examiner, Art Unit 3656