DETAILED ACTION
This is a non-final Office action on the merits in response to the communications filed on 3/3/2026. Claims 1-6 are pending and are addressed below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 3/3/2026 is being considered by the examiner.
The non-English documents have been considered to the extent of the drawings and the translated portions provided therein (see MPEP 609).
Response to Arguments
Applicant’s arguments with respect to claim 1 (see Remarks, beginning on page 4) have been considered but are moot because they are directed to the newly amended limitations; accordingly, the arguments do not apply to the rejections as currently presented.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Nagatsuka et al. (US 20070282485) in view of NEO (JP 2011-044046 A, a reference cited in the IDS of 8/28/2023; the provided translation is cited herein).
Regarding claim 1, Nagatsuka et al. teaches:
A robot simulation device for simulating work performed on a workpiece by a robot in a robot system including the robot, a visual sensor, and the workpiece arranged in a workspace, the robot simulation device comprising:
a processor configured to
arrange a robot model of the robot, a visual sensor model of the visual sensor, and a workpiece model of the workpiece in a virtual space that three-dimensionally expresses the workspace;
(at least figs. 1-5 and [0030]-[0061] show and discuss robot simulation of picking workpieces: robot 22, workpieces 20, imaging means 55 such as a CCD camera, simulation displayed on screens, three-dimensional virtual space 60, robot model 22', workpiece models 40, and camera model 55'; in particular, fig. 4 and [0041]-[0061] show the screen layout of the simulation and its parts, discussing “Although the three-dimensional virtual space 60 is plotted as a plane in FIG. 4, the viewpoints of the three-dimensional virtual space 60 can be three-dimensionally changed by use of an input/output device, such as a mouse”);
calculate a position and a posture of the workpiece model with reference to the robot model or the visual sensor model in the virtual space by superimposing a shape feature of the workpiece model on three-dimensional position information about the workpiece with reference to the robot or the visual sensor, and
execute a simulation operation of measuring the workpiece model by the visual sensor model and causing the robot model to perform work on the workpiece model, wherein
the processor is configured to arrange, in the virtual space, the workpiece model in the calculated position and the calculated posture with reference to the robot model or the visual sensor model so that an actual arrangement of the workpiece in the workspace is reproduced in the virtual space.
(at least figs. 1-5 and [0030]-[0061] show and discuss robot simulation of picking workpieces: robot 22, workpieces 20, imaging means 55 such as a CCD camera, simulation displayed on screens, three-dimensional virtual space 60, robot model 22', workpiece models 40, and camera model 55'; in particular, fig. 4 and [0041]-[0061] show the screen layout of the simulation and its parts, discussing “Although the three-dimensional virtual space 60 is plotted as a plane in FIG. 4, the viewpoints of the three-dimensional virtual space 60 can be three-dimensionally changed by use of an input/output device, such as a mouse”; at least [0051]-[0053] discuss “camera model 55' can acquire the virtual image of the workpiece models 40 in the visual field 56' through a virtual camera means 33. The virtual camera means 33 displays the acquired virtual image as a second screen 52 on the display unit 19” and “correcting means 34 first selects an appropriate workpiece model 40 such as a workpiece model 40a from the virtual image on the second screen 52 and calculates the posture and position”; the second screen 52 thus presents the shape, the position and posture, and the measurement/distance of the workpiece models; these teachings read on calculating a position and a posture of the workpiece model with reference to the robot model or the visual sensor model in the virtual space by superimposing a shape feature of the workpiece model on three-dimensional position information about the workpiece, and read on simulation of measuring the workpiece model by the visual sensor model; [0054]-[0061] discuss simulation of picking up; [0059]-[0061] and [0071]-[0079] discuss “matching the number of the workpiece models 40 with the number of the actual workpieces 20” and arranging workpiece models 40 based on estimated postures of workpieces 20, in particular [0074]);
Nagatsuka et al. does not explicitly teach:
the three-dimensional position information about the workpiece being actually measured by the visual sensor in the workspace;
However, NEO teaches:
the three-dimensional position information about the workpiece being actually measured by the visual sensor in the workspace;
(at least figs. 1-4 and [0021]-[0052] discuss the position/orientation of an object being recognized by vision sensor 20) for object recognition ([0021]-[0052]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Nagatsuka et al. with the three-dimensional position information about the workpiece being actually measured by the visual sensor in the workspace, as taught by NEO, for object recognition.
Regarding claim 2, Nagatsuka et al. teaches:
wherein the three-dimensional position information about the workpiece in the workspace includes three-dimensional position information about all workpieces including the workpiece loaded in bulk in the workspace (at least figs. 1-5 and [0030]-[0061] show and discuss robot simulation of picking workpieces: robot 22, workpieces 20, imaging means 55 such as a CCD camera, simulation displayed on screens, three-dimensional virtual space 60, robot model 22', workpiece models 40, and camera model 55'; in particular, fig. 4 and [0041]-[0061] show the screen layout of the simulation and its parts, discussing “Although the three-dimensional virtual space 60 is plotted as a plane in FIG. 4, the viewpoints of the three-dimensional virtual space 60 can be three-dimensionally changed by use of an input/output device, such as a mouse”; at least [0051]-[0053] discuss “camera model 55' can acquire the virtual image of the workpiece models 40 in the visual field 56' through a virtual camera means 33. The virtual camera means 33 displays the acquired virtual image as a second screen 52 on the display unit 19” and “correcting means 34 first selects an appropriate workpiece model 40 such as a workpiece model 40a from the virtual image on the second screen 52 and calculates the posture and position”; the second screen 52 thus presents the shape, the position and posture, and the measurement/distance of the workpiece models; these teachings read on calculating a position and a posture of the workpiece model with reference to the robot model or the visual sensor model in the virtual space by superimposing a shape feature of the workpiece model on three-dimensional position information about the workpiece, and read on simulation of measuring the workpiece model by the visual sensor model; [0054]-[0061] discuss simulation of picking up);
Nagatsuka et al. does not explicitly teach:
three-dimensional position information being actually measured by the visual sensor;
However, NEO teaches:
three-dimensional position information being actually measured by the visual sensor;
(at least figs. 1-4 and [0021]-[0052] discuss the position/orientation of an object being recognized by vision sensor 20) for object recognition ([0021]-[0052]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Nagatsuka et al. with the three-dimensional position information being actually measured by the visual sensor, as taught by NEO, for object recognition.
Regarding claim 3, Nagatsuka et al. teaches:
wherein the three-dimensional position information about the all workpieces is a set of three-dimensional points of the all workpieces (at least figs. 1-5 and [0030]-[0061] show and discuss robot simulation of picking workpieces: robot 22, workpieces 20, imaging means 55 such as a CCD camera, simulation displayed on screens, three-dimensional virtual space 60, robot model 22', workpiece models 40, and camera model 55'; in particular, fig. 4 and [0041]-[0061] show the screen layout of the simulation and its parts, discussing “Although the three-dimensional virtual space 60 is plotted as a plane in FIG. 4, the viewpoints of the three-dimensional virtual space 60 can be three-dimensionally changed by use of an input/output device, such as a mouse”; at least [0051]-[0053] discuss “camera model 55' can acquire the virtual image of the workpiece models 40 in the visual field 56' through a virtual camera means 33. The virtual camera means 33 displays the acquired virtual image as a second screen 52 on the display unit 19” and “correcting means 34 first selects an appropriate workpiece model 40 such as a workpiece model 40a from the virtual image on the second screen 52 and calculates the posture and position”; the second screen 52 thus presents the shape, the position and posture, and the measurement/distance of the workpiece models; these teachings read on calculating a position and a posture of the workpiece model with reference to the robot model or the visual sensor model in the virtual space by superimposing a shape feature of the workpiece model on three-dimensional position information about the workpiece, and read on simulation of measuring the workpiece model by the visual sensor model; [0054]-[0061] discuss simulation of picking up);
Nagatsuka et al. does not explicitly teach:
actually measured by using the visual sensor;
However, NEO teaches:
actually measured by using the visual sensor;
(at least figs. 1-4 and [0021]-[0052] discuss the position/orientation of an object being recognized by vision sensor 20) for object recognition ([0021]-[0052]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Nagatsuka et al. such that the three-dimensional position information is actually measured by using the visual sensor, as taught by NEO, for object recognition.
Regarding claim 4, Nagatsuka et al. teaches:
the processor is configured to
set a position and a posture of the visual sensor model with reference to the robot model in the virtual space, based on a position and a posture of the visual sensor with reference to the robot in the workspace, and
arrange, in the virtual space, the visual sensor model in the set position and the set posture of the visual sensor model
(at least figs. 1-5 and [0030]-[0061] show and discuss robot simulation of picking workpieces: robot 22, workpieces 20, imaging means 55 such as a CCD camera, simulation displayed on screens, three-dimensional virtual space 60, robot model 22', workpiece models 40, and camera model 55'; in particular, fig. 4 and [0041]-[0061] show the screen layout of the simulation and its parts, discussing “Although the three-dimensional virtual space 60 is plotted as a plane in FIG. 4, the viewpoints of the three-dimensional virtual space 60 can be three-dimensionally changed by use of an input/output device, such as a mouse”; at least [0051]-[0053] discuss “camera model 55' can acquire the virtual image of the workpiece models 40 in the visual field 56' through a virtual camera means 33. The virtual camera means 33 displays the acquired virtual image as a second screen 52 on the display unit 19” and “correcting means 34 first selects an appropriate workpiece model 40 such as a workpiece model 40a from the virtual image on the second screen 52 and calculates the posture and position”; [0054]-[0061] discuss simulation of picking up; in particular, at least [0040]-[0050] discuss positioning the virtual environment to correspond to the real environment, discussing “a camera model 55' is arranged at a place designated in advance in the three-dimensional virtual space 60 of the first screen 51. This place corresponds to the place of the actual imaging means 55 (FIG. 1)”);
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Nagatsuka et al. (US 20070282485) in view of NEO (JP 2011-044046 A, a reference cited in the IDS of 8/28/2023; the provided translation is cited herein), as applied to claim 4 above, and further in view of ATOHIRA et al. (US 20150261899).
Regarding claim 5, Nagatsuka et al. teaches:
the position and the posture of the visual sensor includes the position and the posture of the visual sensor with reference to the robot in the workspace
(at least figs. 1-5 and [0030]-[0061] show and discuss robot simulation of picking workpieces: robot 22, workpieces 20, imaging means 55 such as a CCD camera, simulation displayed on screens, three-dimensional virtual space 60, robot model 22', workpiece models 40, and camera model 55'; in particular, fig. 4 and [0041]-[0061] show the screen layout of the simulation and its parts, discussing “Although the three-dimensional virtual space 60 is plotted as a plane in FIG. 4, the viewpoints of the three-dimensional virtual space 60 can be three-dimensionally changed by use of an input/output device, such as a mouse”; at least [0051]-[0053] discuss “camera model 55' can acquire the virtual image of the workpiece models 40 in the visual field 56' through a virtual camera means 33. The virtual camera means 33 displays the acquired virtual image as a second screen 52 on the display unit 19” and “correcting means 34 first selects an appropriate workpiece model 40 such as a workpiece model 40a from the virtual image on the second screen 52 and calculates the posture and position”; [0054]-[0061] discuss simulation of picking up; in particular, at least [0040]-[0050] discuss positioning the virtual environment to correspond to the real environment, discussing “a camera model 55' is arranged at a place designated in advance in the three-dimensional virtual space 60 of the first screen 51. This place corresponds to the place of the actual imaging means 55 (FIG. 1)”);
Nagatsuka et al. does not explicitly teach:
wherein the position and the posture of the visual sensor are data included in calibration data acquired by performing calibration of the visual sensor in the workspace;
However, ATOHIRA et al. teaches:
wherein the position and the posture of the visual sensor are data included in calibration data acquired by performing calibration of the visual sensor in the workspace;
(at least [0066]) in order to determine the positions of the camera models in the virtual space ([0066]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Nagatsuka et al. such that the position and the posture of the visual sensor are data included in calibration data acquired by performing calibration of the visual sensor in the workspace, as taught by ATOHIRA et al., to determine the positions of the camera models in the virtual space.
Allowable Subject Matter
Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BAO LONG T NGUYEN whose telephone number is (571)270-7768. The examiner can normally be reached M-F 8:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BAO LONG T. NGUYEN
Examiner
Art Unit 3664
/BAO LONG T NGUYEN/Primary Examiner, Art Unit 3664