Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the application filed 08/27/2024. Claims 1-15 are presently pending and are presented for examination.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/27/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a code conversion unit configured to convert a robot program into a code”, “a code analysis unit configured to analyze a captured image of the code”, and “a robot program duplication unit configured to duplicate the restored robot program” in claim 1, “a model arrangement unit configured to arrange” and “a robot program teaching unit configured to perform teaching” in claim 2, “a model arrangement unit configured to arrange”, “a work target designation unit configured to designate a work target”, and “a work program generation unit configured to generate a robot program” in claim 9, and “a code acquisition unit configured to acquire”, “a code analysis unit configured to analyze”, “a robot program duplication unit configured to duplicate”, and “a program execution unit configured to execute” in claim 14.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 7-8, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Hall et al. US 20200306981 A1 (“Hall”).
Regarding Claim 1. Hall teaches a robot programming system comprising:
a first information processing device including a code conversion unit configured to convert a robot program into a code;
a visual sensor configured to capture an image of the code displayed on an information medium (Robotic arms shown in FIG. 1 communicate with a controller at 20. The arm at 14 has a QR code attachment device and code reader at 22 for reading QR codes at 12, shown in more detail in FIG. 2. In this embodiment, the construction elements 10 are tagged with QR codes 12 by the QR code attachment device and code reader 22, which also reads the codes. The controller 20 receives input from the code reader 22 and provides instructions to the robotic arms 16 and 18, which use these instructions to manipulate the construction elements 10 into position and to weld them together as per the instructions [paragraph 24]); and
a robot control device configured to control a robot, the robot control device comprising:
a code analysis unit configured to analyze a captured image of the code so as to restore the robot program (According to FIG. 5, the machine-readable code is read at 503, instructions are provided to the device based on the machine-readable code at 504, and the construction elements are manipulated based on the instructions at 505. As best can be understood, the robot controller restores the robot program from the code in step 503, provides the instructions to the device in step 504, and manipulates the construction element in step 505, which is a duplication of the robot program in the form of performing the program. This would not be possible without storing the program somewhere in the controller and duplicating the program for use by the manipulator arms).
Hall does not explicitly teach a robot program duplication unit configured to duplicate the restored robot program and store the duplicated robot program in a storage unit after the robot program is duplicated and restored. However, this would have been well known to a person of ordinary skill in the art. The QR codes give instantaneous instructions on what the robot should do. If the task were to be repeated, it is known in the art that simply programming the robot to repeat the process to build an identical cube would be faster than having the robot take the time to re-read the QR code with each iteration of construction.
Regarding Claim 7. Hall teaches the robot programming system according to claim 1.
Hall also teaches:
wherein the information medium on which the code is displayed is a paper medium (In FIG. 2, in one embodiment, the QR code can be affixed to a construction element with a code sticker at 34 of FIG. 2 [paragraph 26]).
Regarding Claim 8. Hall teaches the robot programming system according to claim 1.
Hall also teaches:
wherein
the first information processing device further includes a robot program division unit configured to divide the robot program into a plurality of robot programs (In FIG. 2, three separate robotic arms are shown at 14, 16, and 18 in communication with a controller at 20 [paragraph 24]. The controller is responsible for providing instructions to the robots based on the QR codes as described in FIG. 5),
the code conversion unit generates a plurality of codes respectively corresponding to the plurality of divided robot programs (If a variety of codes can be printed on different construction elements as shown in FIG. 2, with different instructions to assemble the structure shown in the same figure, and the robotic arms attach the construction element to other construction elements, as per the instructions from the different machine-readable codes of the different construction elements [paragraph 30], then the device that prints the codes must generate a plurality of codes respectively corresponding to the plurality of divided robot programs),
the visual sensor captures an image of the plurality of codes (the code reader described in FIG. 5, step 503), and
the code analysis unit restores the entire robot program from the captured image of the plurality of codes (step 504 of FIG. 5).
Regarding Claim 13. Hall teaches a robot control device configured to control a robot, the robot control device comprising:
a code acquisition unit configured to acquire information about an image of a code encoding a robot program (Robotic arms shown in FIG. 1 communicate with a controller at 20. The arm at 14 has a QR code attachment device and code reader at 22 for reading QR codes at 12, shown in more detail in FIG. 2. In this embodiment, the construction elements 10 are tagged with QR codes 12 by the QR code attachment device and code reader 22, which also reads the codes. The controller 20 receives input from the code reader 22 and provides instructions to the robotic arms 16 and 18, which use these instructions to manipulate the construction elements 10 into position and to weld them together as per the instructions [paragraph 24]), the image being captured by a visual sensor (FIG. 3, number 58 is an example embodiment of the code reader attached to one embodiment of the robot. Examples of the code reader include a QR code reader or an RFID reader, both of which are visual sensors of a type);
a code analysis unit configured to analyze the information about the image of the code so as to restore the robot program (According to FIG. 5, the machine-readable code is read at 503, instructions are provided to the device based on the machine-readable code at 504, and the construction elements are manipulated based on the instructions at 505. As best can be understood, the robot controller restores the robot program from the code in step 503, provides the instructions to the device in step 504, and manipulates the construction element in step 505, which is a duplication of the robot program in the form of performing the program. This would not be possible without storing the program somewhere in the controller and duplicating the program for use by the manipulator arms).
Hall does not explicitly teach a robot program duplication unit configured to duplicate the restored robot program and store the duplicated robot program in a storage unit after the robot program is duplicated and restored. However, this would have been well known to a person of ordinary skill in the art. The QR codes give instantaneous instructions on what the robot should do. If the task were to be repeated, it is known in the art that simply programming the robot to repeat the process to build an identical cube would be faster than having the robot take the time to re-read the QR code with each iteration of construction.
Regarding Claim 14. Hall teaches a robot control device configured to control a robot, the robot control device comprising:
a code acquisition unit configured to acquire information about an image of a code encoding a robot program for performing work depending on a workpiece (Robotic arms shown in FIG. 1 communicate with a controller at 20. The arm at 14 has a QR code attachment device and code reader at 22 for reading QR codes at 12, shown in more detail in FIG. 2. In this embodiment, the construction elements 10 are tagged with QR codes 12 by the QR code attachment device and code reader 22, which also reads the codes. The controller 20 receives input from the code reader 22 and provides instructions to the robotic arms 16 and 18, which use these instructions to manipulate the construction elements 10 into position and to weld them together as per the instructions [paragraph 24]), wherein the code is attached to the workpiece and the image is captured by a visual sensor (FIG. 2 shows how the code can be attached to the workpiece. FIG. 3, number 58 is an example embodiment of the code reader attached to one embodiment of the robot. Examples of the code reader include a QR code reader or an RFID reader, both of which are visual sensors of a type);
a code analysis unit configured to analyze the information about the image of the code so as to restore the robot program (According to FIG. 5, the machine-readable code is read at 503, instructions are provided to the device based on the machine-readable code at 504, and the construction elements are manipulated based on the instructions at 505. As best can be understood, the robot controller restores the robot program from the code in step 503, provides the instructions to the device in step 504, and manipulates the construction element in step 505, which is a duplication of the robot program in the form of performing the program. This would not be possible without storing the program somewhere in the controller and duplicating the program for use by the manipulator arms); and
a program execution unit configured to execute the duplicated robot program (The controller 20 receives input from the code reader 22 and provides instructions to the robotic arms 16 and 18. The robotic arms use these instructions to manipulate the construction elements 10 into position and to weld them together as per the instructions [paragraph 24]).
Hall does not explicitly teach a robot program duplication unit configured to duplicate the restored robot program and store the duplicated robot program in a storage unit after the robot program is duplicated and restored. However, this would have been well known to a person of ordinary skill in the art. The QR codes give instantaneous instructions on what the robot should do. If the task were to be repeated, it is known in the art that simply programming the robot to repeat the process to build an identical cube would be faster than having the robot take the time to re-read the QR code with each iteration of construction.
Regarding Claim 15. Hall teaches the robot control device according to claim 14.
Hall does not explicitly teach:
wherein the program execution unit deletes the duplicated robot program stored in the storage unit after execution of the duplicated robot program with respect to the workpiece is finished.
However, this would have been obvious to one of ordinary skill in the art where the robot is intended to assemble a different structure, either with different elements or in a different arrangement, and it is well known in the art to delete an outdated instruction once it can no longer be used due to changes in the workpieces the robot works with.
Claim(s) 2 and 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Hall et al. US 20200306981 A1 (“Hall”) as applied to claim 1 above, and further in view of Tokuoka US 20210129331 A1 (“Tokuoka”).
Regarding Claim 2. Hall teaches the robot programming system according to claim 1.
Hall also teaches:
the code conversion unit configured to convert the robot program created by the teaching into a code including information about a command sentence, a motion sentence, and a teaching position of the robot program (This is implied; the QR codes of Hall must contain the information for motion commands somehow. For example, paragraph 31 and FIG. 4 disclose a QR code etcher etching a QR code into an I-beam with a laser, and as disclosed in FIG. 5, the system of Hall is capable of gathering the robot program from the QR code, including manipulating the construction element with the device based on the instructions at 505, which necessarily includes motion and position commands).
Hall does not teach:
wherein the first information processing device is a programming device including:
a model arrangement unit configured to arrange, on a virtual space, a robot system model that three-dimensionally expresses a robot system including the robot and includes a robot model;
a robot program teaching unit configured to perform teaching with respect to the robot system model according to a user input; and
the code conversion unit configured to convert the robot program created by the teaching into a code including information about a command sentence, a motion sentence, and a teaching position of the robot program.
However, Tokuoka teaches:
wherein the first information processing device is a programming device including:
a model arrangement unit configured to arrange, on a virtual space, a robot system model that three-dimensionally expresses a robot system including the robot and includes a robot model (FIG. 3, paragraph 41); and
a robot program teaching unit configured to perform teaching with respect to the robot system model according to a user input (The control device in FIG. 1 at 400 is provided with an external input device at 500, which can be a teaching pendant, and is used for an operator to specify a position and orientation of the robot arm main body [paragraph 25]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hall with wherein the first information processing device is a programming device including: a model arrangement unit configured to arrange, on a virtual space, a robot system model that three-dimensionally expresses a robot system including the robot and includes a robot model; a robot program teaching unit configured to perform teaching with respect to the robot system model according to a user input; and the code conversion unit configured to convert the robot program created by the teaching into a code including information about a command sentence, a motion sentence, and a teaching position of the robot program, as taught by Tokuoka, so as to allow a user to observe a model of the robot program and provide input to adjust the program as needed.
Regarding Claim 9. Hall teaches the robot programming system according to claim 1.
Hall also teaches:
the code conversion unit configured to convert the generated robot program into a code including information about a command sentence, a motion sentence, and a teaching position of the robot program (This is implied; the QR codes of Hall must contain the information for motion commands somehow. For example, paragraph 31 and FIG. 4 disclose a QR code etcher etching a QR code into an I-beam with a laser, and as disclosed in FIG. 5, the system of Hall is capable of gathering the robot program from the QR code, including manipulating the construction element with the device based on the instructions at 505, which necessarily includes motion and position commands).
Hall does not teach:
wherein the first information processing device is a programming device including:
a model arrangement unit configured to arrange, on a virtual space, a robot system model that three-dimensionally expresses a robot system including the robot and a workpiece and includes a robot model and a workpiece model;
a work target designation unit configured to designate a work target of the workpiece model; and
a work program generation unit configured to generate a robot program for performing work with respect to the designated work target (Hall discusses construction elements as work targets, but is silent as to how they are designated).
However, Tokuoka teaches:
wherein the first information processing device is a programming device including:
a model arrangement unit configured to arrange, on a virtual space, a robot system model that three-dimensionally expresses a robot system including the robot and a workpiece and includes a robot model and a workpiece model (FIG. 3, paragraph 41);
a work target designation unit configured to designate a work target of the workpiece model; and
a work program generation unit configured to generate a robot program for performing work with respect to the designated work target (The imaging device 300 can capture an image in the quadrangular pyramid area indicated by a field of view 13. An appropriate setting of values of the joints J.sub.1 to J.sub.6 enables the imaging device 300 to be moved to an intended three-dimensional position to capture an image of a target object [paragraph 30]. In step S1 of FIG. 1, an operator acquires work information about the robot arm main body 200. The work information consists of position coordinates of all the work points (points A to E) to be captured by the imaging device 300 as the hand of the robot arm main body 200 and information relating to constraints on a pose of the hand of the robot arm main body 200 at each work point. FIG. 5 illustrates an exemplary diagram in which the display 602 displays a screen for an operator to enter the information regarding work [paragraph 43]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hall with wherein the first information processing device is a programming device including: a model arrangement unit configured to arrange, on a virtual space, a robot system model that three-dimensionally expresses a robot system including the robot and a workpiece and includes a robot model and a workpiece model; a work target designation unit configured to designate a work target of the workpiece model; and a work program generation unit configured to generate a robot program for performing work with respect to the designated work target, as taught by Tokuoka, so as to allow a user to observe a model of the robot program and designate work targets directly as needed.
Regarding Claim 10. Hall in combination with Tokuoka teaches the robot programming system according to claim 9.
Hall also teaches:
wherein the information medium on which the code is displayed is a medium on which the code is printed and is attached to a predetermined position on the workpiece (In FIG. 2, in one embodiment, the QR code can be affixed to a construction element with a code sticker at 34 of FIG. 2 [paragraph 26]).
Regarding Claim 11. Hall in combination with Tokuoka teaches the robot programming system according to claim 9.
Hall also teaches:
wherein the robot control device further includes a program execution unit configured to execute the duplicated robot program (The controller 20 receives input from the code reader 22 and provides instructions to the robotic arms 16 and 18. The robotic arms use these instructions to manipulate the construction elements 10 into position and to weld them together as per the instructions [paragraph 24]).
Hall does not explicitly teach:
the program execution unit deletes the duplicated robot program stored in the storage unit after execution of the duplicated robot program with respect to the workpiece is finished.
However, this would have been obvious to one of ordinary skill in the art where the robot is intended to assemble a different structure, either with different elements or in a different arrangement, and it is well known in the art to delete an outdated instruction once it can no longer be used due to changes in the workpieces the robot works with.
Regarding Claim 12. Hall in combination with Tokuoka teaches the robot programming system according to claim 9.
Hall also teaches:
wherein
the robot control device further includes a program execution unit configured to execute the duplicated robot program (The controller 20 receives input from the code reader 22 and provides instructions to the robotic arms 16 and 18. The robotic arms use these instructions to manipulate the construction elements 10 into position and to weld them together as per the instructions [paragraph 24]),
the code includes information about a number of workpieces for which the robot program needs to be executed (Information about how to assemble the construction elements is found on the construction elements themselves. In a preferred embodiment, the machine-readable code becomes a new origin for the robotic assembler to work from. Once the construction elements are manipulated, such as welding them together, the instructions direct the code reader where to find the next machine-readable code on the construction elements. The next machine-readable code thereby becomes the next new origin, and new construction elements are thereby connected [paragraph 18], which reads on the code including information about a number of workpieces for which the robot program needs to be executed).
Hall does not explicitly teach:
the program execution unit deletes the robot program stored in the storage unit after the program execution unit repeatedly executes the robot program by the number of workpieces.
However, this would have been obvious to one of ordinary skill in the art where the robot is intended to assemble a different structure, either with different elements or in a different arrangement, and it is well known in the art to delete an outdated instruction once it can no longer be used due to changes in the workpieces the robot works with.
Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over Hall et al. US 20200306981 A1 (“Hall”) as applied to claim 1 above, and further in view of Barr et al. US 20210158272 A1 (“Barr”).
Regarding Claim 3. Hall teaches the robot programming system according to claim 1.
Hall also teaches:
wherein the first information processing device is a teaching device configured to teach motion to an actual robot and generate the robot program (This is inherent. The computer-readable codes described in paragraph 18 have to be applied to the construction elements somehow, with one example in paragraph 31 and FIG. 4 of a QR code etcher etching a QR code into an I-beam with a laser, and these codes can provide information to (teach) the robot on how to assemble the construction elements).
Additionally, and in the alternative, Barr teaches:
wherein the first information processing device is a teaching device configured to teach motion to an actual robot and generate the robot program (The label service 1424 receives order information through the app service 1414 and uses that information to generate labels including text and barcodes that will be used by carriers for identifying and routing packages. The label service 1424 communicates the labels to a printer 1426 to produce the physical labels that will be affixed to packages [paragraph 179]. This is for a system in which a robot could receive a carton recommendation from a cartonization engine and use that information to retrieve the correct container to package items in a customer order [paragraph 125]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hall with wherein the first information processing device is a teaching device configured to teach motion to an actual robot and generate the robot program, as taught by Barr, because the codes in Hall have to be printed somehow in order for the system of Hall to work.
Claim(s) 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Hall et al. US 20200306981 A1 (“Hall”) as applied to claim 1 above, and further in view of Gong et al. US 20150134115 A1 (“Gong”).
Regarding Claim 4. Hall teaches the robot programming system according to claim 1.
Hall does not teach:
wherein the information medium on which the code is displayed is a display screen of a second information processing device.
However, Gong teaches:
wherein the information medium on which the code is displayed is a display screen of a second information processing device (A user can provide commands to a mobile robot using a computing device that generates a glyph containing the command and displays the glyph on a display device of the computing device. In some implementations, the computing device 200 transmits the command to a service provider 110, which in turn generates the glyph 202 and transmits the glyph to the computing device 200. The mobile robot 300 captures image data (e.g. a digital photograph) that includes the glyph 202, decodes the glyph 202 to determine the command, and issues a command to one of its resources or components. As used herein, the term glyph 202 can refer to any image that is capable of storing data, including but not limited to barcodes and matrix barcodes. A matrix barcode is a two-dimensional bar code. Matrix bar codes can include quick-response codes ("QR-codes"). The foregoing framework departs from traditional use of QR-codes, which are usually displayed on advertisements or other static mediums and meant to be captured by mobile computing devices (e.g., smartphones 200b and tablets 200a) [paragraph 24]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hall with wherein the information medium on which the code is displayed is a display screen of a second information processing device, as taught by Gong, so as to allow a changeable display screen to communicate the code to the robot and to display a new code when a new program is required.
Regarding Claim 5. Hall in combination with Gong teaches the robot programming system according to claim 4.
Hall does not teach:
wherein the first information processing device transfers the code to the second information processing device by an e-mail function.
However, while not explicit, Gong teaches that the bar code can be transferred to a smartphone or other mobile computing device and displayed on a display device of the computing device [paragraph 24]. E-mail is a common method in the art for transferring such data, and so it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hall in combination with Gong with wherein the first information processing device transfers the code to the second information processing device by an e-mail function, as this would merely be an application of a known technique to a known device to yield predictable results.
Regarding Claim 6. Hall teaches the robot programming system according to claim 1.
Hall does not teach:
wherein the information medium on which the code is displayed is a display screen of the first information processing device.
However, Gong teaches:
wherein the information medium on which the code is displayed is a display screen of the first information processing device (The computing device used to display the glyph to the robot can be the user device (smartphone or tablet) [paragraph 42]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hall with wherein the information medium on which the code is displayed is a display screen of the first information processing device, as taught by Gong, so as to allow a changeable display screen to communicate the code to the robot and to display a new code when a new program is required.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON G CAIN whose telephone number is (571)272-7009. The examiner can normally be reached Monday to Friday, 7:30am - 4:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON G CAIN/Examiner, Art Unit 3656