DETAILED ACTION
Receipt of applicant’s amendment to the claims filed 5/20/2025 is acknowledged. Claims 1–7 are currently pending in the case and are addressed on the merits below.
Response to Arguments
Applicant’s arguments/remarks filed 5/20/2025 have been carefully considered. Applicant’s amendments have introduced a new claim combination. Applicant argues that US 2018/0161978 Naitou fails to disclose all of the features of the newly amended claims. Applicant’s argument is moot because the amendments introduced new limitations that changed the scope of the claims, necessitating the new grounds of rejection set forth below, which do not rely upon Naitou for the features in question.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 3 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by JP2021133467 Okada.
With regard to claim 1, Okada discloses:
A robot control system for controlling a robot (fig. 2 item 1, [0017]) when the robot handles an article (fig. 5 item 41), the robot control system comprising:
a placement area in which the article is to be placed (fig. 2 item 33, [0022] “a placement position 33”);
an information providing part (fig. 2 item 32, [0039] “the landmark 32 includes information indicating the position of the placement position”) comprising one of: a QR code; and an AR marker (see [0026] “The landmarks 32 may be one or more alignment marks, bar codes, or two-dimensional codes.” Examiner notes that under broadest reasonable interpretation, the noted citation reads at least on the “AR marker” required by the claim), provided on one of a robot unit including the robot and the placement area (see fig. 2 showing landmark 32 on the surface on which the placement area 33 is located), and configured to provide information on handling of the article by the robot, the information comprising work step information indicating at least one of: a pickup position of the article on the placement area where the article is picked up by the robot; and a placement position of the article on the placement area where the article is placed by the robot (see [0039] “landmark includes information indicating the position of the placement position 33”);
an information acquisition part (fig. 3 camera 6), provided on the other one of the robot unit and the placement area (fig. 3 showing camera (item 6) on the robot rather than the placement area where the landmark (item 32) is located), and configured to acquire the information by scanning the one of: the QR code; and the AR marker of the information providing part (see [0039] “the camera 6 photographs the landmark 32”); and
a control device configured to control the robot when the robot handles the article, based on the information acquired by the information acquisition part (see [0041] “The placement control unit 14 controls the position and posture of the arm 3 based on the results read by the reading control unit so that the transported object 41 is positioned on the placement position 33”).
With regard to claim 3, Okada discloses:
the robot control system according to claim 1, wherein the robot comprises a robot arm configured to transport the article (see fig. 2 item 3; [0006] describing the manipulator having an end effector for gripping the object),
the robot unit comprises the robot and a mobile platform on which the robot is mounted (see fig. 2 item 1; [0006] describing a mobile robot (item 2) with an arm (item 3) attached),
the information acquisition part comprises an imaging device (camera item 6) configured to acquire the information by capturing an image of the one of: the QR code; and the AR marker of the information providing part (see [0039] “the camera 6 photographs the landmark 32 … thereafter, the mobile manipulator 1 places the transported object at the placement position 33 corresponding to the imaged landmark 32.”), and
the control device is configured to recognize a relative position of the robot unit to the placement area, based on the image of the one of: the QR code; and the AR marker of the information providing part captured by the imaging device (see [0028] “the camera 6 has a function of reading the landmarks 32 and outputs the reading results (captured image) to the reading control unit 12”; see also [0039] “The reading control unit 12 calculates and detects the position and orientation of the mobile manipulator 1 relative to the placement position 33 based on the landmarks 32 included in the captured image” ), and control an action of the robot arm in accordance with the relative position (see [0039] “The amount of deviation from the placement position 33 at this time is transmitted to the rotation control unit 13 and the placement control unit 14 and used for correction.”; see also [0041] “the placement control unit 14 controls the position and posture of the arm 3 based on the results read by the reading control unit…”).
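For illustration only, and not part of the cited record, the following is a minimal Python sketch of the kind of marker-based relative-pose computation the citations above describe (camera images a landmark, and the arm motion is corrected from the resulting relative position). All frame names, transforms, and numeric values are assumptions introduced for the sketch, not taken from Okada.

```python
# Illustrative sketch only: marker-based relative-pose computation of the kind
# described in the cited passages of Okada. Frame names, calibration values,
# and detection results are hypothetical assumptions.
import numpy as np


def make_T(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


# Assumed calibration: camera pose expressed in the robot-base frame.
T_base_camera = make_T(np.eye(3), np.array([0.10, 0.0, 0.50]))

# Assumed detection result: landmark (marker) pose in the camera frame,
# e.g. as produced by a fiducial-marker detector.
T_camera_marker = make_T(np.eye(3), np.array([0.0, 0.05, 0.80]))

# Assumed prior knowledge: placement-area pose relative to the marker
# (the marker sits at a known offset from the placement position).
T_marker_area = make_T(np.eye(3), np.array([0.20, 0.0, 0.0]))

# Relative position of the placement area in the robot-base frame, i.e. the
# quantity that would be used to correct the arm's placement motion.
T_base_area = T_base_camera @ T_camera_marker @ T_marker_area
print("placement area in base frame:", T_base_area[:3, 3])
```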
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over JP2021133467 Okada in view of US 2018/0161978 Naitou.
With regard to claim 2, Okada discloses all of the elements of claim 1 as discussed above. Additionally, Okada also discloses:
wherein the robot comprises a robot arm configured to transport the article (see fig. 2 item 3; [0006] describing the manipulator having an end effector for gripping the object).
Okada does not disclose, but Naitou does disclose:
the information further comprises information on a position of an obstacle relative to the placement area (see fig. 1 and 2; see also [0007], [0017] discussing setting an interference region for an obstacle based on a coordinate system that is set based on a shape feature (marker) detected in an image; [0019] discussing that the shape feature is a marker provided in the environment; [0022] discussing that the interference region is set based in part on the reference coordinate system, which is calculated based at least in part on the image of the shape feature (marker)) and
the control device is configured to control, when it is determined that the obstacle is present based on the information on the position of the obstacle, a trajectory of the robot arm in such a manner as to avoid the obstacle (see [0022] “The robot controller 28 can then command the mobile robot 12 to operate so as to avoid the set interference region”).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Okada to incorporate the storage of obstacle information and generation of interference regions as suggested by Naitou. Specifically, it would have been obvious to further determine the location of known obstacles based on the scanned information providing part and to generate an interference region and control the robot based thereon. Such a modification would allow for a means to communicate and determine the location of obstacles relative to the robot and to control the robot so as to avoid collisions with said obstacles.
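For illustration only, and not part of the cited record, the following is a minimal Python sketch of an interference-region check of the general kind the proposed combination contemplates (an obstacle position decoded from the information providing part defines a region the arm trajectory must avoid). The box representation, waypoint format, and values are assumptions, not taken from Naitou.

```python
# Illustrative sketch only: interference-region check over a candidate trajectory.
# The region shape, coordinates, and trajectory format are hypothetical assumptions.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]


@dataclass
class InterferenceRegion:
    """Axis-aligned box around a known obstacle, in the placement-area frame."""
    min_corner: Point
    max_corner: Point

    def contains(self, p: Point) -> bool:
        return all(lo <= v <= hi
                   for v, lo, hi in zip(p, self.min_corner, self.max_corner))


def violates(trajectory: List[Point], regions: List[InterferenceRegion]) -> bool:
    """Return True if any waypoint of the candidate arm trajectory enters a region."""
    return any(r.contains(p) for p in trajectory for r in regions)


# Obstacle position as it might be decoded from the scanned information.
obstacle = InterferenceRegion(min_corner=(0.4, -0.1, 0.0), max_corner=(0.6, 0.1, 0.3))
candidate = [(0.0, 0.0, 0.2), (0.5, 0.0, 0.2), (1.0, 0.0, 0.2)]
print("replan needed:", violates(candidate, [obstacle]))  # True: middle waypoint is inside the box
```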
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over JP2021133467 Okada in view of US 6,349,245 Finlay.
With regard to claim 4, Okada discloses:
wherein the control device is configured to recognize the relative position of the robot unit to the placement area by utilizing the image of the one of: the QR code; and the AR marker, captured by the imaging device (see [0028] “the camera 6 has a function of reading the landmarks 32 and outputs the reading results (captured image) to the reading control unit 12”; see also [0039] “The reading control unit 12 calculates and detects the position and orientation of the mobile manipulator 1 relative to the placement position 33 based on the landmarks 32 included in the captured image”).
However, Okada does not disclose details of how the images are utilized to recognize the relative position of the robot. Therefore, Okada does not disclose but Finlay does disclose:
comparing the image (fig. 3 item 16 showing comparing current images with stored images) of the one of: the QR code; and the AR marker, captured by the imaging device, with a pre-stored image (fig. 3 item 5 showing prestored images) of the one of the QR code and the AR marker that is captured in advance by the imaging device when the mobile platform is in a prescribed position relative to the placement area (col. 3, lines 12–30 and 41–50, disclosing placing the robot in a first state (position) and additional states and positions and comparing the image and position data to register the robot; see also col. 4, line 63 – col. 5, line 4, and col. 5, lines 15–21).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Okada’s generic registration process in light of the specifics of Finlay’s registration process, such as pre-storing images of the QR code or AR marker from prescribed/known positions of the robot to be utilized for comparison with the current image of the QR code or AR marker. Such a modification would provide a specific means for determining the relative position of the robot.
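For illustration only, and not part of the cited record, the following is a minimal Python sketch of registration by comparison against a pre-stored reference image, in the spirit of the Finlay-style modification discussed above. The corner coordinates and the pixel-to-metre scale are hypothetical assumptions.

```python
# Illustrative sketch only: estimate the platform's offset from a prescribed
# position by comparing currently detected marker corners with corners stored
# in advance. All coordinates and the scale factor are hypothetical assumptions.
import numpy as np

# Marker corner pixel coordinates captured in advance, with the mobile
# platform at the prescribed position relative to the placement area.
reference_corners = np.array([[100, 100], [200, 100], [200, 200], [100, 200]], dtype=float)

# Marker corner pixel coordinates detected in the current image.
current_corners = np.array([[112, 95], [212, 95], [212, 195], [112, 195]], dtype=float)

# Mean corner displacement approximates the planar offset of the platform
# from its prescribed position; the scale converts pixels to metres.
PIXELS_PER_METRE = 400.0
offset_px = (current_corners - reference_corners).mean(axis=0)
offset_m = offset_px / PIXELS_PER_METRE
print("estimated platform offset (x, y) in metres:", offset_m)
```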
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over JP2021133467 Okada in view of US 2022/0079694 Freiin Von Kapri et al.
With regard to claim 5, Okada does not disclose:
wherein the information further comprises information on a size and orientation of the placement area.
However, as discussed above, Okada does teach scanning of a marker or code in order to acquire information (fig. 2 item 32, [0039] “the landmark 32 includes information indicating the position of the placement position”; see [0041] “The placement control unit 14 controls the position and posture of the arm 3 based on the results read by the reading control unit so that the transported object 41 is positioned on the placement position 33”).
In addition, Freiin Von Kapri teaches wherein the information further comprises information on a size and orientation of the placement area (see [0069] teaching a robot which acquires information regarding characteristics of the workspace such as orientation and/or sizes of the workspace).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the acquired information of Freiin Von Kapri with the information acquired by scanning the QR code or AR marker of Okada. Such a modification would have yielded predictable results, specifically such that scanning of the QR code / AR marker would have resulted in acquiring information on the workspace such as size and/or orientation.
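For illustration only, and not part of the cited record, the following is a minimal Python sketch of decoding placement-area size and orientation from a scanned code payload, as the proposed combination contemplates. The payload schema and field names are hypothetical assumptions, not taken from either reference.

```python
# Illustrative sketch only: parse size and orientation of the placement area
# from a scanned code payload. The JSON schema and field names are hypothetical.
import json

# Example payload string as it might be returned by a QR / AR-marker scanner.
payload = '{"area_id": "A3", "width_m": 0.60, "depth_m": 0.40, "yaw_deg": 15.0}'

info = json.loads(payload)
width, depth, yaw = info["width_m"], info["depth_m"], info["yaw_deg"]
print(f"placement area {info['area_id']}: {width} m x {depth} m, rotated {yaw} deg")
```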
Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over JP2021133467 Okada in view of US2023/0152780 Akazawa et al.
With regard to claim 6, Okada discloses all of the elements of claim 1 as discussed above.
Okada is silent with regard to, but Akazawa teaches:
wherein the information further comprises information on a collaboration task to be performed by a worker in the vicinity of the placement area (see fig. 2 showing robots 20A and 20C with code readers 27A and 27C respectively; see fig. 5, specifically S31, S32, S36, S61, S62 and S66, wherein the code reader reads information utilized for continuing the collaborative work process; see figs. 6 and 7 showing both base identification information and identification information which is read to determine appropriate processing of the work item in the collaborative process; see also [0006], [0018], [0029], [0033], [0052] – [0055], [0058], [0063] – [0064], [0066] discussing a collaborative robot system in which steps of the human worker and the collaborative robots are determined in part based on the identification information read by the code reader; see also [0070] – [0071] discussing that when the information read from the code shows the workpiece is incorrect, the robot stops to allow the worker to make a correction).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Okada to encode data related to a collaborative task in the information read from the marker. Such a modification would extend the usability of the system of Okada to collaborative tasks by allowing the system to determine steps to be performed in the collaborative process.
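For illustration only, and not part of the cited record, the following is a minimal Python sketch of dispatching a collaborative-task step from identification information read from a code, in the spirit of the Okada/Akazawa combination. The step names, identifiers, and stop-and-correct behavior shown here are hypothetical assumptions.

```python
# Illustrative sketch only: choose the next collaborative step from scanned
# identification information. Task names and identifiers are hypothetical.
NEXT_STEP = {
    "fasten": "robot tightens fasteners, then hands off to the worker",
    "inspect": "worker inspects the workpiece in the vicinity of the placement area",
}


def dispatch(scanned_id: str, expected_id: str, task: str) -> str:
    # If the scanned workpiece is not the expected one, stop and let the worker correct it.
    if scanned_id != expected_id:
        return "stop robot and wait for worker correction"
    return NEXT_STEP.get(task, "no collaborative step defined for this task")


print(dispatch("WP-102", "WP-102", "inspect"))
```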
With regard to claim 7, Okada discloses:
wherein the robot unit includes a first robot unit (fig. 2 item 1, [0017]), the placement area includes a first placement area (fig. 2 item 33, [0022] “a placement position 33”), the information providing part includes a first information providing part (fig. 2 item 32, [0039] “the landmark 32 includes information indicating the position of the placement position”), and the information acquisition part includes a first information acquisition part (fig. 3 camera 6),
the article is transported to the first placement area by the first robot unit (see [0024] discussing area 31 for temporary placement of the article on the mobile robot),
the first information providing part is provided on the first placement area (see fig. 2 showing landmark 32 on the surface on which the placement area 33 is located),
the first information acquisition part is provided on the first robot unit so as to acquire information from the first information providing part (fig. 3 camera 6; see [0039] “the camera 6 photographs the landmark 32”).
However, Okada does not disclose a second robot unit and a second placement area.
Akazawa discloses using plural robots in a collaborative environment in which the article being worked on is transported between placement/work areas (see fig. 1 illustrating a plurality of robots working on an article at prescribed positions, the article being transported between work areas; see [0020], [0021]).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Okada by adding additional robot units and placement areas as taught by Akazawa. Such a modification would allow for implementation of a collaborative work process in which a workpiece/article can be moved between placement areas to be processed.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM R MOTT whose telephone number is (571)270-5376. The examiner can normally be reached M-F 9 - 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trammell can be reached at (571) 272-6712. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657