Prosecution Insights
Last updated: April 19, 2026
Application No. 18/301,354

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, ROBOT SYSTEM, ROBOT SYSTEM CONTROL METHOD, ARTICLE MANUFACTURING METHOD USING ROBOT SYSTEM, AND RECORDING MEDIUM

Status: Non-Final Office Action (§102, §103)
Filed: Apr 17, 2023
Examiner: CAIN, AARON G
Art Unit: 3656
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Canon Kabushiki Kaisha
OA Round: 3 (Non-Final)

Grant Probability: 40% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 3m
Grant Probability With Interview: 66%

Examiner Intelligence

Career Allow Rate: 40% (52 granted / 130 resolved; -12.0% vs TC avg)
Interview Lift: +26.1% in resolved cases with interview
Avg Prosecution: 3y 3m
Total Applications: 172 across all art units (42 currently pending)

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

TC averages are estimates; based on career data from 130 resolved cases.

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/28/2025 has been entered.

Response to Arguments

Applicant's arguments filed 10/28/2025 regarding the rejection of claims 1-5 and 23-26 under 35 U.S.C. 102(a)(1) as being anticipated by Ito US 20130054025 A1 (“Ito”) have been fully considered but are not persuasive. Applicant argues that Ito does not teach or suggest measuring points in real space where the robot arm and the target exist and then obtaining shape information of the target in association with position information in real space by using a measurement result and information of predetermined measurement points, as recited in amended claim 1. However, as discussed in further detail below, Ito does teach these elements, particularly in FIG. 2 and paragraphs 43-44, and so the rejection is maintained. Further, applicant's argument that the secondary citations to Ooba and Keshmiri do not cure the deficiencies of Ito is moot, because the disclosure of Ito is sufficient to support a rejection of amended claim 1. Similarly, independent claims 23 and 25, which are analogous to claim 1, are also rejected in light of the disclosure of Ito. Additionally, dependent claims 6-22 remain rejected in light of the disclosure of Ito in combination with Ooba and Keshmiri.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-5, 23-26, and 28 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ito US 20130054025 A1 (“Ito”).

Regarding Claim 1. Ito teaches an information processing system comprising: a device that includes a movable unit including a measurement unit configured to measure a shape of an object (A camera is shown at 103 of FIG. 1, and the camera at 103 may be a single-lens camera for obtaining two-dimensional luminance image information, or a distance measuring sensor such as a stereo camera, a TOF sensor, or a laser range finder for obtaining distance information [paragraph 30]. FIG. 11A shows the state where an on-hand camera at 117 is mounted on the hand mechanism of the robot arm illustrated in FIG. 1, and this camera at 117 is capable of the same functionality as the camera at 103 regarding measurement and calculation of a target object’s orientation and position, as shown by the methods in FIGS. 5A-5B and 9A-9B [paragraph 102]); and a simulation unit that performs an operation simulation for the device in a virtual space by using a virtual model (Referring to a Japanese patent, which discloses a master component, in which a simulation component that simulates an installation portion of a second component to be installed on an installation portion of a first component is installed in a position corresponding to a target position, is secured to a pallet, and a camera fixed to the pallet captures an image [paragraph 6]. This is improved upon in the method disclosed in FIGS. 9A and 9B, with FIG.
10 showing an example of the virtual space, including the robot and a target [paragraphs 95-97]), wherein the movable unit moves the measurement unit to predetermined measurement points (The teaching data generation unit 113 sets, to the robot control unit 111, the candidate position/orientation of the hand mechanism calculated by the hand position/orientation candidate calculation unit 112 as a target state. The robot control unit 111 controls driving of the robot arm 101 and the hand mechanism 102 so as to be in the target state [paragraph 40]. Since this can be combined with the camera at 117 mounted on the robot arm, this reads on moving the measurement unit to predetermined measurement points), wherein the measurement unit measures a target at the predetermined measurement points in real space where the device and the target exist, and the measurement unit obtains shape information of the target in association with position information in real space by using a measurement result and information of the predetermined measurement points (FIG. 2 shows an example of a geodesic dome with a basic shape of a regular icosahedron. The gravity center of a target object in a prescribed orientation is arranged so as to agree with the center point of the regular icosahedron, and, for example, appearances of the target object seen from viewpoints that are the center points of the vertexes and triangle surface elements are defined as the representative orientations [paragraph 43], which reads on the measurement unit obtaining shape information of the target in association with position information in real space by using a measurement result and information of the predetermined measurement points, and this process is built upon to better assemble the image and position information [paragraph 44]), wherein the simulation unit constructs the virtual model in the virtual space, which corresponds to a position and a shape of the target in the real space, based on the shape information and the position information (In FIG. 10, data on the operation environment including the installation target component 1002 in a state of being installed, the installation recipient component 1003 arranged in a prescribed position, the component supply tray 1004, the working table 1005, and the like is modeled using a combination of cuboids fixed in the virtual space [paragraph 98]. FIG. 10 explicitly shows the position and shape of the target in real space, based on shape and position information, which are discussed in further detail in FIG. 6, at S603, wherein the component position/orientation detection processing unit performs the processing for fitting with the component shape data [paragraph 84]).

Regarding Claim 2. Ito teaches the information processing system according to claim 1. Ito also teaches: wherein the information regarding the predetermined measurement points includes information regarding a position and a measurement direction of the measurement unit based on a position of the device (The robot is capable of calculating the hand position/orientation, and this information is used in determining the position of the target component [paragraph 13]).

Regarding Claim 3. Ito teaches the information processing system according to claim 1. Ito also teaches: wherein the predetermined measurement points are registered based on setting information input by an operator (Ito incorporates by reference Japanese Patent No. 04167954, in which a user who performs a teaching operation specifies a target point on an image captured by an arm camera. The user then specifies a spatial position of the target point by moving the arm camera based on the position information and specifying the same target point on a newly captured image [paragraph 5]).

Regarding Claim 4. Ito teaches the information processing system according to claim 3. Ito also teaches: wherein the setting information is information input by the operator while operating the device in advance (Ito incorporates by reference Japanese Patent No. 04167954, in which a user who performs a teaching operation specifies a target point on an image captured by an arm camera. The user then specifies a spatial position of the target point by moving the arm camera based on the position information and specifying the same target point on a newly captured image [paragraph 5]).

Regarding Claim 5. Ito teaches the information processing system according to claim 3. Ito also teaches: wherein the setting information includes information regarding a measurement target area including the target (In the Ito invention, the teaching processing is started in a state where the installation target component is installed on the installation recipient component and arranged at a prescribed position in advance by a user. The prescribed position is the same position as the position in which the installation recipient component is arranged when the robot performs the installation operation [paragraph 62]).

Regarding Claim 23. Ito teaches an information processing method for performing motion simulation in virtual space using a virtual model, the method comprising: obtaining shape information of a target in association with position information in real space by using a measurement result of the target at predetermined measurement points in the real space where the target exists and information of the predetermined measurement points (FIG. 2 shows an example of a geodesic dome with a basic shape of a regular icosahedron. The gravity center of a target object in a prescribed orientation is arranged so as to agree with the center point of the regular icosahedron, and, for example, appearances of the target object seen from viewpoints that are the center points of the vertexes and triangle surface elements are defined as the representative orientations [paragraph 43], which reads on the measurement unit obtaining shape information of the target in association with position information in real space by using a measurement result and information of the predetermined measurement points, and this process is built upon to better assemble the image and position information [paragraph 44]. FIG. 10 shows an example of the robot arm, hand mechanism, and the installation target component at 1002, along with the three-dimensional coordinate axes [paragraphs 97-98], wherein the model is acquired based on three-dimensional point data), and constructing the virtual model in the virtual space, which corresponds to a position and a shape of the target in the real space, based on the shape information and the position information (In FIG.
10, data on the operation environment including the installation target component 1002 in a state of being installed, the installation recipient component 1003 arranged in a prescribed position, the component supply tray 1004, the working table 1005, and the like is modeled using a combination of cuboids fixed in the virtual space [paragraph 98]. FIG. 10 explicitly shows the position and shape of the target in real space, based on shape and position information, which are discussed in further detail in FIG. 6, at S603, wherein the component position/orientation detection processing unit performs the processing for fitting with the component shape data [paragraph 84]).

Regarding Claim 24. Ito teaches a non-transitory computer-readable recording medium recording a program for causing a computer to execute the information processing method according to claim 23 (Ito describes the non-transitory computer-readable storage medium in claim 13, in addition to the disclosure of a medium in paragraph 118).

Regarding Claim 25. Ito teaches a robot system comprising: a robot that includes a movable unit including a measurement unit configured to measure a shape of an object (A camera is shown at 103 of FIG. 1, and the camera at 103 may be a single-lens camera for obtaining two-dimensional luminance image information, or a distance measuring sensor such as a stereo camera, a TOF sensor, or a laser range finder for obtaining distance information [paragraph 30]. FIG. 11A shows the state where an on-hand camera at 117 is mounted on the hand mechanism of the robot arm illustrated in FIG. 1, and this camera at 117 is capable of the same functionality as the camera at 103 regarding measurement and calculation of a target object’s orientation and position, as shown by the methods in FIGS. 5A-5B and 9A-9B [paragraph 102]); and a simulation unit that performs an operation simulation for the robot in a virtual space by using a virtual model (Referring to a Japanese patent, which discloses a master component, in which a simulation component that simulates an installation portion of a second component to be installed on an installation portion of a first component is installed in a position corresponding to a target position, is secured to a pallet, and a camera fixed to the pallet captures an image [paragraph 6]. This is improved upon in the method disclosed in FIGS. 9A and 9B, with FIG. 10 showing an example of the virtual space, including the robot and a target [paragraphs 95-97]), wherein the movable unit moves the measurement unit to predetermined measurement points (The teaching data generation unit 113 sets, to the robot control unit 111, the candidate position/orientation of the hand mechanism calculated by the hand position/orientation candidate calculation unit 112 as a target state. The robot control unit 111 controls driving of the robot arm 101 and the hand mechanism 102 so as to be in the target state [paragraph 40]. Since this can be combined with the camera at 117 mounted on the robot arm, this reads on moving the measurement unit to a predetermined measurement point), wherein the measurement unit measures a target at the predetermined measurement points in real space where the robot and the target exist, and the measurement unit obtains shape information of the target in association with position information in real space by using a measurement result and information of the predetermined measurement points (FIG. 2 shows an example of a geodesic dome with a basic shape of a regular icosahedron. The gravity center of a target object in a prescribed orientation is arranged so as to agree with the center point of the regular icosahedron, and, for example, appearances of the target object seen from viewpoints that are the center points of the vertexes and triangle surface elements are defined as the representative orientations [paragraph 43], which reads on the measurement unit obtaining shape information of the target in association with position information in real space by using a measurement result and information of the predetermined measurement points, and this process is built upon to better assemble the image and position information [paragraph 44]), wherein the simulation unit constructs the virtual model in the virtual space, which corresponds to a position and a shape of the target in the real space, based on the shape information and the position information (In FIG. 10, data on the operation environment including the installation target component 1002 in a state of being installed, the installation recipient component 1003 arranged in a prescribed position, the component supply tray 1004, the working table 1005, and the like is modeled using a combination of cuboids fixed in the virtual space [paragraph 98]. FIG. 10 explicitly shows the position and shape of the target in real space, based on shape and position information, which are discussed in further detail in FIG. 6, at S603, wherein the component position/orientation detection processing unit performs the processing for fitting with the component shape data [paragraph 84]).

Regarding Claim 26. Ito teaches a robot system control method comprising: creating, by using the robot system according to claim 25, a control program for the robot by performing the operation simulation for the robot in the virtual space (The program recorded on a memory device to perform the functions of the invention is described in paragraph 118. FIG. 10 shows the virtual space).

Regarding Claim 28.
Ito teaches the information processing system according to claim 1. Ito also teaches: wherein the position information of the target includes a position relative to a reference position for controlling the device in the surrounding environment of the device (The position of a reference point (an object center or the like) of the installation target component 104 in the camera coordinate system is represented as a translation vector Pw from an origin of the camera coordinate system [paragraph 50]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 6-17, 20-21, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Ito US 20130054025 A1 (“Ito”) as applied to claim 1 above, and further in view of Ooba US 20210064009 A1 (“Ooba”).

Regarding Claim 6. Ito teaches the information processing system according to claim 3. Ito does not teach: wherein the setting information includes information of a movement-prohibited area to which the movable unit is prohibited from moving. However, Ooba teaches: wherein the setting information includes information of a movement-prohibited area to which the movable unit is prohibited from moving (The control unit 110 may generate an AR image of a motion restriction area of the robot 300. The motion restriction area is set around an operator, peripheral devices, or the like, and is an area where the motion of the robot 300 is stopped or restricted. With the AR image of the motion restriction area, the user of the augmented reality display device 200 can intuitively recognize a set range of the motion restriction area [paragraph 79]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the setting information includes information of a movement-prohibited area to which the movable unit is prohibited from moving as taught by Ooba so as to allow the user to input settings for a prohibited area that the robot should avoid, particularly to prevent collisions with obstacles and personnel.

Regarding Claim 7. Ito in combination with Ooba teaches the information processing system according to claim 6. Ito does not teach: wherein the movement-prohibited area is settable by a preset virtual model of a peripheral object existing in the surrounding environment. However, Ooba teaches: wherein the movement-prohibited area is settable by a preset virtual model of a peripheral object existing in the surrounding environment (The control unit 110 may generate an AR image of a motion restriction area of the robot 300. The motion restriction area is set around an operator, peripheral devices, or the like, and is an area where the motion of the robot 300 is stopped or restricted. With the AR image of the motion restriction area, the user of the augmented reality display device 200 can intuitively recognize a set range of the motion restriction area [paragraph 79]. Also, the augmented reality display device receives input given by a user through an input unit, which can be used to adjust the regions of the AR image [paragraphs 34-35, 78-79]).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the movement-prohibited area is settable by a preset virtual model of a peripheral object existing in the surrounding environment as taught by Ooba so as to allow the user to input the movement-prohibited area.

Regarding Claim 8. Ito teaches the information processing system according to claim 3. Ito does not teach: wherein the setting information includes information regarding number of times of measurement performed by the measurement unit at the predetermined measurement points. However, Ooba teaches: wherein the setting information includes information regarding number of times of measurement performed by the measurement unit at the predetermined measurement points (FIG. 5 is a flowchart illustrating the simulation process of the conveyance simulation device 100. The set of steps in the shown flowchart is repeatedly executed during the simulation process [paragraph 87], including updating positions of the virtual articles, wherein those articles represent items, particularly 71 and 72 in FIG. 3, on virtual lanes that are target articles for a robot shown in FIG. 4. The virtual article management unit 140 sequentially updates the positions of the virtual articles 71, 72 in the conveyance coordinate system based on, for example, the settings of the virtual conveyance unit 120 such as the position and attitude, the virtual conveying velocity, and the virtual lanes 121, 122, and the settings of the virtual article feeding unit 130 such as the feeding schedules [paragraph 66]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the setting information includes information regarding number of times of measurement performed by the measurement unit at the predetermined measurement points as taught by Ooba so that the system can update the measurements performed by the system, and the updating settings can be input by a user.

Regarding Claim 9. Ito teaches the information processing system according to claim 3. Ito does not teach: wherein the setting information and/or the information regarding the predetermined measurement points are displayed on a display unit. However, Ooba teaches: wherein the setting information and/or the information regarding the predetermined measurement points are displayed on a display unit (The display unit 220 of FIG. 1 is a liquid crystal display or the like, and displays the real space image captured by the camera 210 together with AR image data created by the conveyance simulation device 100 [paragraph 31]. Further, as shown in FIG. 4, the control unit 110 may generate an AR image of a reference coordinate system OC that is set with respect to the robot 300, and an AR image of the conveying direction MD of the virtual conveyance unit 120. These AR images allow the user of the augmented reality display device 200 to visually check a relationship between the reference coordinate system OC and the conveying direction MD, and to check whether the conveying direction MD of the virtual conveyance unit 120 is suitable with respect to the settings such as the reference coordinate system OC [paragraph 80]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the setting information and/or the information regarding the predetermined measurement points are displayed on a display unit as taught by Ooba so as to allow a user to read the setting information.

Regarding Claim 10. Ito teaches the information processing system according to claim 1. Ito also teaches: wherein a measurement is performed at the predetermined measurement points, measurement results of the measurement are used to acquire three-dimensional point cloud data including the position information of the target, and the virtual model is constructed based on the three-dimensional point cloud data (FIG. 10 shows an example of the robot arm, hand mechanism, and the installation target component at 1002, along with the three-dimensional coordinate axes [paragraphs 97-98], wherein the model is acquired based on three-dimensional point data). Ito does not teach: wherein a plurality of times of measurement is performed, and measurement results of the plurality of times of measurement are synthesized to acquire the three-dimensional data. However, Ooba teaches: wherein a plurality of times of measurement is performed, and measurement results of the plurality of times of measurement are synthesized to acquire the three-dimensional data (FIG. 5 is a flowchart illustrating the simulation process of the conveyance simulation device 100. The set of steps in the shown flowchart is repeatedly executed during the simulation process [paragraph 87], including updating positions of the virtual articles, wherein those articles represent items, particularly 71 and 72 in FIG. 3, on virtual lanes that are target articles for a robot shown in FIG. 4.
The virtual article management unit 140 sequentially updates the positions of the virtual articles 71, 72 in the conveyance coordinate system based on, for example, the settings of the virtual conveyance unit 120 such as the position and attitude, the virtual conveying velocity, and the virtual lanes 121, 122, and the settings of the virtual article feeding unit 130 such as the feeding schedules [paragraph 66]. The AR image may be a three-dimensional image [paragraph 83], and the functions of the simulation device may be implemented using a virtual server function or the like on a cloud [paragraphs 149-151]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein a plurality of times of measurement is performed, and measurement results of the plurality of times of measurement are synthesized to acquire the three-dimensional data as taught by Ooba so as to allow the system to update its position measurements and acquire three-dimensional data accordingly.

Regarding Claim 11. Ito teaches the information processing system according to claim 1. Ito does not teach: wherein a plurality of measurement points is registered as the predetermined measurement points in such a way that a measurement range of the measurement unit covers the target. However, Ooba teaches: wherein a plurality of measurement points is registered as the predetermined measurement points in such a way that a measurement range of the measurement unit covers the target (FIG. 5 is a flowchart illustrating the simulation process of the conveyance simulation device 100. The set of steps in the shown flowchart is repeatedly executed during the simulation process [paragraph 87], including updating positions of the virtual articles, wherein those articles represent items, particularly 71 and 72 in FIG. 3, on virtual lanes that are target articles for a robot shown in FIG. 4. The virtual article management unit 140 sequentially updates the positions of the virtual articles 71, 72 in the conveyance coordinate system based on, for example, the settings of the virtual conveyance unit 120 such as the position and attitude, the virtual conveying velocity, and the virtual lanes 121, 122, and the settings of the virtual article feeding unit 130 such as the feeding schedules [paragraph 66]. The AR image may be a three-dimensional image [paragraph 83]. FIG. 3 shows, as random offset areas 131, 132, predetermined ranges which are each set with reference to the virtual article generation position [paragraph 55]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein a plurality of measurement points is registered as the predetermined measurement points in such a way that a measurement range of the measurement unit covers the target as taught by Ooba so as to allow the system to monitor a region of points where the target can be located.

Regarding Claim 12. Ito in combination with Ooba teaches the information processing system according to claim 11. Ito does not teach: wherein measurement results obtained at the plurality of measurement points are synthesized to acquire three-dimensional point cloud data including the position information of the target. However, Ooba teaches: wherein measurement results obtained at the plurality of measurement points are synthesized to acquire three-dimensional point cloud data including the position information of the target (FIG. 5 is a flowchart illustrating the simulation process of the conveyance simulation device 100. The set of steps in the shown flowchart is repeatedly executed during the simulation process [paragraph 87], including updating positions of the virtual articles, wherein those articles represent items, particularly 71 and 72 in FIG. 3, on virtual lanes that are target articles for a robot shown in FIG. 4. The virtual article management unit 140 sequentially updates the positions of the virtual articles 71, 72 in the conveyance coordinate system based on, for example, the settings of the virtual conveyance unit 120 such as the position and attitude, the virtual conveying velocity, and the virtual lanes 121, 122, and the settings of the virtual article feeding unit 130 such as the feeding schedules [paragraph 66]. The AR image may be a three-dimensional image [paragraph 83], and the functions of the simulation device may be implemented using a virtual server function or the like on a cloud [paragraphs 149-151]. Updating the position information is a form of synthesizing data from measurement points). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein measurement results obtained at the plurality of measurement points are synthesized to acquire three-dimensional point cloud data including the position information of the target as taught by Ooba so that the system can update its measurements of the target position information.

Regarding Claim 13. Ito in combination with Ooba teaches the information processing system according to claim 10. Ito also teaches: wherein after synthesizing the measurement results, filter processing is performed on the three-dimensional point cloud data (Paragraph 59 describes how noise can be eliminated through edge extraction, and improvement in detection accuracy can be expected as a result).

Regarding Claim 14. Ito teaches the information processing system according to claim 1. Ito does not teach: wherein the simulation unit is configured to display the virtual model of the target set in the virtual space on a display unit. However, Ooba teaches: wherein the simulation unit is configured to display the virtual model of the target set in the virtual space on a display unit (The display unit 220 of FIG.
1 is a liquid crystal display or the like, and displays the real space image captured by the camera 210 together with AR image data created by the conveyance simulation device 100 [paragraph 31]. Further, as shown in FIG. 4, the control unit 110 may generate an AR image of a reference coordinate system OC that is set with respect to the robot 300, and an AR image of the conveying direction MD of the virtual conveyance unit 120. These AR images allow the user of the augmented reality display device 200 to visually check a relationship between the reference coordinate system OC and the conveying direction MD, and to check whether the conveying direction MD of the virtual conveyance unit 120 is suitable with respect to the settings such as the reference coordinate system OC [paragraph 80]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the simulation unit is configured to display the virtual model of the target set in the virtual space on a display unit as taught by Ooba so that the user can view the virtual model of the target in virtual space. Regarding Claim 15. Ito teaches the information processing system according to claim 1. Ito does not teach: wherein a setting screen with which information regarding the measurement unit and information regarding a measurement area related to the measurement are settable by a user is displayed. However, Ooba teaches: wherein a setting screen with which information regarding the measurement unit and information regarding a measurement area related to the measurement are settable by a user is displayed (The display unit 220 of FIG. 1 is a liquid crystal display or the like, and displays the real space image captured by the camera 210 together with AR image data created by the conveyance simulation device 100 [paragraph 31]. Further, as shown in FIG. 
4, the control unit 110 may generate an AR image of a reference coordinate system OC that is set with respect to the robot 300, and an AR image of the conveying direction MD of the virtual conveyance unit 120. These AR images allow the user of the augmented reality display device 200 to visually check a relationship between the reference coordinate system OC and the conveying direction MD, and to check whether the conveying direction MD of the virtual conveyance unit 120 is suitable with respect to the settings such as the reference coordinate system OC [paragraph 80]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein a setting screen with which information regarding the measurement unit and information regarding a measurement area related to the measurement are settable by a user is displayed as taught by Ooba so as to allow the user to view the settings they have input. Regarding Claim 16. Ito in combination with Ooba teaches the information processing system according to claim 15. Ooba also teaches, as best can be understood: wherein the predetermined measurement points are automatically acquired based on the information set (The display unit 220 of FIG. 1 is a liquid crystal display or the like, and displays the real space image captured by the camera 210 together with AR image data created by the conveyance simulation device 100 [paragraph 31]. Further, as shown in FIG. 4, the control unit 110 may generate an AR image of a reference coordinate system OC that is set with respect to the robot 300, and an AR image of the conveying direction MD of the virtual conveyance unit 120. 
These AR images allow the user of the augmented reality display device 200 to visually check a relationship between the reference coordinate system OC and the conveying direction MD, and to check whether the conveying direction MD of the virtual conveyance unit 120 is suitable with respect to the settings such as the reference coordinate system OC [paragraph 80]. The system performs the process of acquiring the measurement point once these settings are input). Ito does not teach: the information is set using the setting screen. However, Ooba teaches: the information is set using the setting screen (The display unit 220 of FIG. 1 is a liquid crystal display or the like, and displays the real space image captured by the camera 210 together with AR image data created by the conveyance simulation device 100 [paragraph 31]. Further, as shown in FIG. 4, the control unit 110 may generate an AR image of a reference coordinate system OC that is set with respect to the robot 300, and an AR image of the conveying direction MD of the virtual conveyance unit 120. These AR images allow the user of the augmented reality display device 200 to visually check a relationship between the reference coordinate system OC and the conveying direction MD, and to check whether the conveying direction MD of the virtual conveyance unit 120 is suitable with respect to the settings such as the reference coordinate system OC [paragraph 80]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with the information is set using the setting screen as taught by Ooba so as to allow the user to view the settings they have input. Regarding Claim 17. Ito in combination with Ooba teaches the information processing system according to claim 6. 
Ito also teaches: wherein at least two postures of the measurement unit are set at the predetermined measurement points (The hand orientation capable of gripping a single portion is not always defined uniquely, and usually, as shown in FIGS. 8A and 8B, the orientations in a certain range in which the hand is rotated around the gripped portion are grippable orientations. A plurality of orientations determined by sampling the orientation within this range at every prescribed angle are extracted as the candidate hand gripping orientations [paragraph 71]). Ito does not teach: in a case where the measurement unit interferes with the movement-prohibited area due to movement of the measurement unit to the predetermined measurement points, the predetermined measurement points interfering with the movement-prohibited area is excluded. However, Ooba teaches: in a case where the measurement unit interferes with the movement-prohibited area due to movement of the measurement unit to the predetermined measurement points, the predetermined measurement points interfering with the movement-prohibited area is excluded (Further, the control unit 110 may generate an AR image of a motion restriction area of the robot 300. The motion restriction area is set around an operator, peripheral devices, or the like, and is an area where the motion of the robot 300 is stopped or restricted. With the AR image of the motion restriction area, the user of the augmented reality display device 200 can intuitively recognize a set range of the motion restriction area [paragraph 79]). 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with in a case where the measurement unit interferes with the movement-prohibited area due to movement of the measurement unit to the predetermined measurement points, the predetermined measurement points interfering with the movement-prohibited area is excluded as taught by Ooba so as to allow the system to prevent the robot from moving through an area that the robot should avoid, particularly to prevent collisions with obstacles and personnel. Regarding Claim 20. Ito in combination with Ooba teaches the information processing system according to claim 14. Ito does not teach: wherein the simulation unit is configured to display the virtual model of the target set in the virtual space on a tablet terminal or a head mounted display. However, Ooba teaches: wherein the simulation unit is configured to display the virtual model of the target set in the virtual space on a tablet terminal or a head mounted display (The augmented reality display device 200 is embodied as a smartphone, a tablet terminal, a head-mounted display, glasses with augmented reality (AR) display function, or the like [paragraph 29]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the simulation unit is configured to display the virtual model of the target set in the virtual space on a tablet terminal or a head mounted display as taught by Ooba so as to allow the user to view the virtual model on a portable device, rather than rely on a stationary display device. Regarding Claim 21. Ito in combination with Ooba teaches the information processing system according to claim 14. 
Ito does not teach: wherein the simulation unit is configured to display the virtual model of the target in a form of any one of virtual reality (VR), augmented reality (AR), mixed reality (MR), and cross reality (XR). However, Ooba teaches: wherein the simulation unit is configured to display the virtual model of the target in a form of any one of virtual reality (VR), augmented reality (AR), mixed reality (MR), and cross reality (XR) (The display unit 220 of FIG. 1 is a liquid crystal display or the like, and displays the real space image captured by the camera 210 together with AR image data created by the conveyance simulation device 100 [paragraph 31]. Further, as shown in FIG. 4, the control unit 110 may generate an AR image of a reference coordinate system OC that is set with respect to the robot 300, and an AR image of the conveying direction MD of the virtual conveyance unit 120. These AR images allow the user of the augmented reality display device 200 to visually check a relationship between the reference coordinate system OC and the conveying direction MD, and to check whether the conveying direction MD of the virtual conveyance unit 120 is suitable with respect to the settings such as the reference coordinate system OC [paragraph 80]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the simulation unit is configured to display the virtual model of the target in a form of any one of virtual reality (VR), augmented reality (AR), mixed reality (MR), and cross reality (XR) as taught by Ooba because this is a common solution in the art for displaying a virtual model, particularly for a virtual reality model. Regarding Claim 27. Ito teaches a method for manufacturing an article by using a robot system, the method comprising: performing, by using the robot system control method according to claim 26. 
Ito does not teach: a simulation related to an operation of the robot for manufacturing the article in the virtual space; creating a control program for the robot related to the manufacturing of the article; and operating the robot by using the control program to manufacture the article. However, Ooba teaches: a simulation related to an operation of the robot for manufacturing the article in the virtual space (FIG. 4 shows an example of what is displayed by an augmented reality display device, including a robot working on articles on a conveyor belt); creating a control program for the robot related to the manufacturing of the article (paragraph 33); and operating the robot by using the control program to manufacture the article (paragraph 33). Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over Ito US 20130054025 A1 (“Ito”) as applied to claim 1 above, and further in view of Keshmiri et al. US 20170364076 A1 (“Keshmiri”). Regarding Claim 18. Ito teaches the information processing system according to claim 1. Ito does not explicitly teach: wherein the predetermined measurement points that is outside a movable range of the device is excluded. Ito does show in FIG. 10 an example of a virtual space, wherein the robot arm 101 and the hand mechanism 102 are modeled based on connected links 1001, each link being an axis extending from one joint to the next joint, using cuboids respectively fixed to the links 1001. The connected links have mechanical constraints on the angles therebetween, and possible rotational direction and angles are within a prescribed range [paragraph 97], which means that the only possible measurement points that are included are points within the range of the robot links. However, Ito does not explicitly state that points outside of the robot’s movable range are excluded. 
However, Keshmiri teaches: wherein the predetermined measurement points that is outside a movable range of the device is excluded (Typically, path planning for a robotic system is accomplished through either a series of rapid moves of a robot executed in a joint space or a series of feed moves of a robot executed in a Cartesian space. In particular, joint-space planning involves planning joint moves of a robot by specifying a set of joint parameters that describes the configuration of the robot. Joint-space planning does not require verification of the path for robotic errors, such as singularity errors or out of reach errors [paragraph 6]. Specifically, for the tessellated points that are associated with a certain type of error (e.g., singularity, collision and/or out-of-reach), the points are displayed in a specific color, along with segments of the path connecting these points. For example, an affected path segment associated with (i) a singularity error can be coded red, (ii) a collision error can be coded dark red and (iii) an out-of-reach error can be coded blue, where the robot is unlikely to be able to reach that path section [paragraph 48]. This reads on excluding points that are out-of-reach of the robot). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the predetermined measurement points that is outside a movable range of the device is excluded as taught by Keshmiri because excluding the points that the robot cannot move to would save the robot system on processing data in displaying its virtual model, and leave the virtual model less cluttered with irrelevant points to which the movement cannot reach. Claim(s) 19 is rejected under 35 U.S.C. 103 as being unpatentable over Ito US 20130054025 A1 (“Ito”). Regarding Claim 19. Ito teaches the information processing system according to claim 1. 
Ito also teaches, as best as can be understood: wherein the predetermined measurement points are divided into at least two layers based on a measurement area related to the measurement (The robot is capable of selecting a component located in an uppermost layer, which implies that there is at least one layer beneath it [paragraph 37], which reads on at least two layers in a measurement area), and a point interfering with the model is excluded from the predetermined measurement points based on the layers (this is implied. There is no mention of a point interfering with the model being included, and it would have been obvious to one of ordinary skill in the art to extend the disclosure of Ito to not include the interfering measurement point). Claim(s) 22 is rejected under 35 U.S.C. 103 as being unpatentable over Ito US 20130054025 A1 (“Ito”) as applied to claim 1 above, and further in view of Keshmiri et al. US 20170364076 A1 (“Keshmiri”). Regarding Claim 22. Ito teaches the information processing system according to claim 1. Ito also teaches: wherein the target is placed on a pedestal (FIGS. 1 and 11A both show a pedestal at 116 where the target object is located at 104). Ito does not teach: wherein the device is mounted on a carriage. However, Keshmiri teaches: wherein the device is mounted on a carriage (The robot arm shown in FIG. 3 includes a base at 302, which may be a stationary base or a movable base, such as a base including wheels (not shown) [paragraph 43]). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Ito with wherein the device is mounted on a carriage as taught by Keshmiri so as to allow the system of Ito to be applied to a mobile robot. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON G CAIN whose telephone number is (571)272-7009. 
The examiner can normally be reached Monday to Friday, 7:30am - 4:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AARON G CAIN/Examiner, Art Unit 3656

Prosecution Timeline

Apr 17, 2023
Application Filed
Apr 15, 2025
Non-Final Rejection — §102, §103
Jul 18, 2025
Response Filed
Aug 25, 2025
Final Rejection — §102, §103
Oct 28, 2025
Response after Final Action
Nov 18, 2025
Request for Continued Examination
Dec 01, 2025
Response after Non-Final Action
Jan 26, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573302
METHOD FOR INFRASTRUCTURE-SUPPORTED ASSISTING OF A MOTOR VEHICLE
2y 5m to grant · Granted Mar 10, 2026
Patent 12558790
METHOD AND COMPUTING SYSTEMS FOR PERFORMING OBJECT DETECTION
2y 5m to grant · Granted Feb 24, 2026
Patent 12552019
MACHINE LEARNING METHOD AND ROBOT SYSTEM
2y 5m to grant · Granted Feb 17, 2026
Patent 12544144
DENTAL ROBOT AND ORAL NAVIGATION METHOD
2y 5m to grant · Granted Feb 10, 2026
Patent 12541205
MOVEMENT CONTROL SUPPORT DEVICE AND METHOD
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
40%
Grant Probability
66%
With Interview (+26.1%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
