Prosecution Insights
Last updated: April 19, 2026
Application No. 18/033,059

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Status: Non-Final OA (§103)
Filed: Apr 20, 2023
Examiner: O'MALLEY, JOHN MARTIN
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Avatarin Inc.
OA Round: 3 (Non-Final)

Grant Probability: 33% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 0m
Allowance With Interview: 0%

Examiner Intelligence

Career Allow Rate: 33% (grants only 33% of cases; 1 granted / 3 resolved; -18.7% vs TC avg)
Interview Lift: -33.3% (minimal; based on resolved cases with interview, which allowed at 0%)
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 43 (career history, across all art units; 40 currently pending)

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 70.7% (+30.7% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 3 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The following claims have been rejected or allowed for the following reasons: Claims 1-20 are rejected under 35 U.S.C. § 103.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP 2020-176568, filed on 10/21/2020.

Information Disclosure Statement

The information disclosure statements (IDS) were filed on 04/20/2023, 06/25/2024, 10/08/2024, 09/03/2025, and 10/02/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bareddy (US 9821455 B1) in view of Baroudi (US 20200009729 A1).
Regarding claim 1

Bareddy teaches An information processing device comprising: a processor; and a storage unit, wherein the processor is configured to execute a program stored in the storage unit to perform a control method including:

acquiring information on a selection condition (Bareddy column 8 lines 58-61 reads “The available robot selection engine 152 is configured to determine if a robot is available that is suited to perform an identified robot task and, if so, to select the robot to perform the task.”);

setting a first robot as an operation target to be operated by a terminal device (Bareddy column 1 lines 20 – 24 read “Also, for instance, some telepresence robots may be able to navigate to various locations autonomously, semi-autonomously, and/or based on control commands provided by a user via a computing device of the user.”); and

enabling the operation target to be switched from the first robot to a second robot, which is different from the first robot, among a plurality of operable robots including robots selected in accordance with the selection condition (Bareddy column 1 lines 55 – 63 read “Based on determining the need, a second telepresence robot is selected to “replace” the first telepresence robot in performing the task. The second telepresence robot may replace the first telepresence robot in performing the task by directing the second telepresence robot to navigate to a location proximal to the first telepresence robot and transitioning the first telepresence robot's session to the second telepresence robot.”).

Bareddy does not teach wherein the plurality of operable robots further include robots selected based on information specified by image recognition processing for images captured by the first robot, wherein the information specified by the image recognition processing represents one or more objects identified in the images captured by the first robot.
Baroudi, in analogous art, teaches wherein the plurality of operable robots further include robots selected based on information specified (Baroudi [0172 – 0174] reads “Method 1700 includes detecting 1702 a trigger event associated with a task to be performed. The trigger event may be detected by sensors placed on mobile objects, such as farm equipment, or other equipment placed within an predetermined environment, and configured to detect certain biological or hazardous situations. … Additionally, the sensors may be cameras that detect accidents, or certain situations where humans need assistance, such as overcrowding and the like. For example, a camera may detect an overcrowded area, within a stadium, an event, concert, pilgrimage or the like, and as a result, a few people may get hurt or injured. The camera can detect that people are on the floor and may need assistance or rescue. Accordingly, a trigger event associated with a task to perform a rescue operation, for example, is detected and transmitted.” and “The robotic network may be a network preconfigured to perform certain tasks (e.g., rescue operations) or the broadcast signal could be randomly transmitted to a plurality of robotic networks. The robotic network includes one or more robots, and the broadcast signal includes information associated with the task to be performed, the information including the trigger event (e.g., accident), the type of task to be performed (e.g., rescue)”);

by image recognition processing for images captured by the first robot, wherein the information specified by the image recognition processing represents one or more objects identified in the images captured by the first robot (Baroudi [0095] reads “In one example, a scenario of imaging objects (plants) for a disease diagnostic system may be used to further illustrate the task allocation methodology. A team of robots equipped with appropriate sensors integrated with a remote sensing system, such as that illustrated in FIG. 6, can be used in a useful farming scenario application. This application includes two systems: remote sensing and near-range sensing. The aim of remote sensing is to detect and diagnose any unhealthy symptoms in an area of interest, such as diseases, weeds, and pests. If any disease is detected, the remote sensing advertises a new task for the near-range sensing system, which is a team of robots equipped with appropriate sensors, such as a camera, thermography, chlorophyll fluorescence and hyperspectral sensors. The remote sensing system provides necessary information about a task, such as its location, the amount of resources required (fertilizers/pesticides), and the required quality. A robot's quality is the quality of its camera resolution, whereas a task's quality requirement is the resolution of the image required (since different diseases require different image resolutions to be detected).”).

It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Bareddy with those of Baroudi to include a method that would allow the robotic system to select robots for a task based on the situation around them. This would allow the system to have improved capabilities over a single robot. (Baroudi [0003] reads “Robotic networks attract researchers' attention, because of their ability to incorporate multiple technological platforms, including computational, sensing, communications and movement platforms. Additionally, such networks are suitable to be used in a wide range of applications where human intervention is limited or denied.”)

Regarding claim 2

Bareddy/Baroudi teaches The information processing device according to claim 1, wherein the enabling of the operation target to be switched is performed when predetermined conditions regarding a user's input, movement of the first robot, or an elapsed time are satisfied.
(Bareddy column 17 lines 19 – 27 read “For example, the system may initiate the replacement of an active session between the task engaged robot and a computing device of the user with a new session between the selected available robot and the computing device of the user. In some implementations, the system may initiate the replacement in response to the available robot reaching the location proximal to the task engaged robot.”)

Regarding claim 3

Bareddy/Baroudi teaches The information processing device according to claim 1, wherein the plurality of operable robots further include a robot associated with the one of the robots selected in accordance with the selection condition. (Bareddy column 7 lines 42 – 58 read “although only a single user 101 and a single computing device 120 are illustrated in FIG. 1, it is understood that in many implementations multiple users and/or multiple computing devices may be provided. For example, in some implementations multiple users may utilize respective computing devices to establish sessions with multiple of the telepresence robots 130A-C (and/or additional unillustrated robots) to enable performance of multiple robot tasks.”)

Regarding claim 11

Bareddy/Baroudi teaches The information processing device according to claim 1, wherein the plurality of robots are located at places separated from each other by a predetermined distance or more. (Bareddy column 5 lines 28 – 36 read “For example, the telepresence robots 130A-C may be located in a building (e.g., a warehouse, a manufacturing facility, an office building), in one or more buildings of a collection of nearby buildings, in one or more floors of a multi-floor office or other building, etc. Additional and/or alternative robots may be provided in other implementations, such as additional telepresence robots that vary in one or more respects from those illustrated in FIG. 1.”)

Regarding claim 13

Bareddy teaches An information processing method performed by a computer, the information processing method comprising:

acquiring information on a selection condition (Bareddy column 8 lines 58-61, quoted above regarding claim 1);

setting a first robot, among a plurality of operable robots including robots selected in accordance with the selection condition, as an operation target to be operated by a terminal device (Bareddy column 1 lines 20 – 24, quoted above regarding claim 1); and

enabling the operation target to be switched from the first robot to a second robot, which is different from the first robot, among the plurality of operable robots (Bareddy column 1 lines 55 – 63, quoted above regarding claim 1).

Bareddy does not teach wherein the plurality of operable robots further include robots selected based on information specified by image recognition processing for images captured by the first robot, wherein the information specified by the image recognition processing represents one or more objects identified in the images captured by the first robot.
Baroudi, in analogous art, teaches wherein the plurality of operable robots further include robots selected based on information specified by image recognition processing for images captured by the first robot, wherein the information specified by the image recognition processing represents one or more objects identified in the images captured by the first robot (Baroudi [0172 – 0174] and [0095], quoted above regarding claim 1). It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Bareddy with those of Baroudi for the reasons given above regarding claim 1 (Baroudi [0003], quoted above).

Regarding claim 14

Bareddy teaches A non-transitory storage medium storing a program for executing an information processing method including causing a computer to:

acquire information on a selection condition (Bareddy column 8 lines 58-61, quoted above regarding claim 1);

set a first robot, among a plurality of operable robots including robots selected in accordance with the selection condition, as an operation target to be operated by a terminal device (Bareddy column 1 lines 20 – 24, quoted above regarding claim 1); and

enable the operation target to be switched from the first robot to a second robot, which is different from the first robot, among the plurality of operable robots (Bareddy column 1 lines 55 – 63, quoted above regarding claim 1).

Bareddy does not teach wherein the plurality of operable robots further include robots selected based on information specified by image recognition processing for images captured by the first robot, wherein the information specified by the image recognition processing represents one or more objects identified in the images captured by the first robot.

Baroudi, in analogous art, teaches this limitation (Baroudi [0172 – 0174] and [0095], quoted above regarding claim 1). It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Bareddy with those of Baroudi for the reasons given above regarding claim 1 (Baroudi [0003], quoted above).

Regarding claim 15

Bareddy/Baroudi teaches The information processing device of claim 1, wherein the robots selected based on the information specified by the image recognition processing are robots that are related to the one or more objects. (Baroudi [0095], quoted above regarding claim 1)

Regarding claim 16

Bareddy/Baroudi teaches The information processing device of claim 15, wherein the robots that are related to the one or more objects are robots positioned at a location associated with an origin of a respective one of the one or more objects. (Baroudi [0095], quoted above regarding claim 1. It would be appreciated by one with ordinary skill in the art that the origin location of a set of robots that would be used to monitor a farm would be located and kept at or near the farm, which would be considered the origin of the desired object.)

Regarding claim 17

Bareddy/Baroudi teaches The information processing device of claim 15, wherein the robots that are related to the one or more objects are robots classified into a group corresponding to a category of the identified one or more objects. (Baroudi [0010 – 0011] reads “According to an exemplary embodiment, a dynamic multi-objective task allocation system within a robotic network may be deployed to assign one or more tasks in real-time as the tasks are detected, the dynamic multi-objective task allocation system comprising a sensing device including circuitry configured to detect a trigger event, the trigger event associated with a task to be performed, and transmit a broadcast signal to a designated robotic network, the robotic network including one or more robots, the broadcast signal including information associated with the task to be performed, the information including the trigger event, the type of task to be performed, and a location where the task is to be performed; a distribution robot within the robotic network, the distribution robot including circuitry configured to receive the broadcast signal from the sensing device, perform a self-assessment associated with performing the task and assign itself a self-assessment score, transmit, to one or more receiving robots within the robotic network, a request for submission of an assessment score of each one of the one or more robots, the assessment score being a self-generated score generated at each receiving robot and reflecting a score of each receiving robot's ability to perform the task, the assessment score based on a quality metric, a distance metric, and a workload metric, receive one or more submissions from the one or more robots, the one or more submissions including the assessment score, compare the received one or more assessment scores with the self-assessment score, and select an assessment score, the selected assessment score being the lowest score from among the self-score and the one or more assessment scores. In one embodiment, the distribution robot circuitry may further transmit a selection notification to one of the one or more receiving robots, the notification indicating that the one of the one or more receiving robots is selected to perform the task.”)

Regarding claim 18

Bareddy/Baroudi teaches The information processing device of claim 15, wherein the robots that are related to the one or more objects are robots placed at a location thematically associated with a type of the identified one or more objects. (Baroudi [0095], quoted above regarding claim 1)

Regarding claim 20

Bareddy/Baroudi teaches The information processing device of claim 1, wherein the plurality of operable robots are classified into groups that are determined based on information related to the one or more objects identified in the images captured by the first robot. (Baroudi [0010 – 0011], quoted above regarding claim 17)

Claims 5-8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Bareddy in view of Baroudi, and further in view of Wood (US 20090326735 A1).

Regarding claim 3

Bareddy/Baroudi teaches The information processing device according to claim 1. Bareddy/Baroudi does not teach wherein the plurality of robots selected include the robot associated with the robot specified in accordance with the selection condition.

Wood, in analogous art, teaches a plurality of robots selected including the robot associated with the robot specified in accordance with the selection condition. (Wood [0017] teaches “Moreover, the unmanned and autonomous vehicle could operate in a group of vehicles. In this aspect, the group of vehicles may include one or more vehicles from each class of vehicle including one or more of land-based, one or more of water-based, and/or one or more of air-based vehicles.”)

It would have been obvious to one with ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Bareddy/Baroudi with those of Wood to include a method that would allow for a grouping of different vehicles to better work together. This would provide the system with an improved capability of remotely operated robots.
(Wood [0004] reads “Traditional Unmanned Aircraft Systems (UAS) are remotely piloted or execute a preplanned route plan. They have limited capability to respond to a dynamic battlespace environment without direct human intervention to replan and transmit new instructions to the UAS. Traditional UAS also have limited capability to collaborate with other manned or unmanned aircraft without human intervention. These limitations make it difficult to deploy large numbers of UAS to support a tactical mission without a large and expensive commitment of human resources to monitor and control each aircraft.”);

Regarding claim 5
Bareddy/Baroudi teaches The information processing device according to claim 1. Bareddy/Baroudi does not teach wherein the plurality of robots selected include the robots classified into the same group specified in accordance with the selection condition. Wood, in analogous art, teaches wherein the plurality of robots selected include the robots classified into the same group specified in accordance with the selection condition. (Wood [0017] reads “In the description that follows, the unmanned and autonomous vehicle will be described, merely for the convenience to the reader and to simplify the descriptions, as an aircraft or UAV (Unmanned Aerial Vehicle). However, the unmanned and autonomous vehicle could be a land-based, water-based, or air-based. Moreover, the unmanned and autonomous vehicle could operate in a group of vehicles. In this aspect, the group of vehicles may include one or more vehicles from each class of vehicle including one or more of land-based, one or more of water-based, and/or one or more of air-based vehicles.”);

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Bareddy/Baroudi with those of Wood to include a method that would allow a grouping of different vehicles to work together more effectively.
This would provide the system with an improved capability of remotely operated robots. (Wood [0004] reads “Traditional Unmanned Aircraft Systems (UAS) are remotely piloted or execute a preplanned route plan. They have limited capability to respond to a dynamic battlespace environment without direct human intervention to replan and transmit new instructions to the UAS. Traditional UAS also have limited capability to collaborate with other manned or unmanned aircraft without human intervention. These limitations make it difficult to deploy large numbers of UAS to support a tactical mission without a large and expensive commitment of human resources to monitor and control each aircraft.”);

Regarding claim 6
Bareddy/Baroudi/Wood teaches The information processing device according to claim 5, wherein the classification of the robots into the group includes being determined in accordance with meta information which is set for the robots in advance. (Wood [0017] reads “Moreover, the unmanned and autonomous vehicle could operate in a group of vehicles. In this aspect, the group of vehicles may include one or more vehicles from each class of vehicle including one or more of land-based, one or more of water-based, and/or one or more of air-based vehicles.”);

Regarding claim 7
Bareddy/Baroudi/Wood teaches The information processing device according to claim 5, wherein the classification of the robots into the group includes being determined in accordance with contents capable of being experienced by a user through the robots. (Wood [0017] reads “In the description that follows, the unmanned and autonomous vehicle will be described, merely for the convenience to the reader and to simplify the descriptions, as an aircraft or UAV (Unmanned Aerial Vehicle).
However, the unmanned and autonomous vehicle could be a land-based, water-based, or air-based.”);

Regarding claim 8
Bareddy/Baroudi/Wood teaches The information processing device according to claim 5, wherein the classification of the robots into the group includes being determined based on information specified. (Wood [0017] reads “In the description that follows, the unmanned and autonomous vehicle will be described, merely for the convenience to the reader and to simplify the descriptions, as an aircraft or UAV (Unmanned Aerial Vehicle). However, the unmanned and autonomous vehicle could be a land-based, water-based, or air-based. Moreover, the unmanned and autonomous vehicle could operate in a group of vehicles. In this aspect, the group of vehicles may include one or more vehicles from each class of vehicle including one or more of land-based, one or more of water-based, and/or one or more of air-based vehicles.”);

Regarding claim 10
Bareddy/Baroudi teaches The information processing device according to claim 1. Bareddy/Baroudi does not teach wherein the information on the selection condition includes information on contents capable of being experienced by the user through the robot. Wood, in analogous art, teaches wherein the information on the selection condition includes information on contents capable of being experienced by the user through the robot. (Wood [0017] reads “In the description that follows, the unmanned and autonomous vehicle will be described, merely for the convenience to the reader and to simplify the descriptions, as an aircraft or UAV (Unmanned Aerial Vehicle). However, the unmanned and autonomous vehicle could be a land-based, water-based, or air-based.”);

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the teachings of Bareddy/Baroudi with those of Wood to include a method that would allow a grouping of different vehicles to work together more effectively.
This would provide the system with an improved capability of remotely operated robots. (Wood [0004] reads “Traditional Unmanned Aircraft Systems (UAS) are remotely piloted or execute a preplanned route plan. They have limited capability to respond to a dynamic battlespace environment without direct human intervention to replan and transmit new instructions to the UAS. Traditional UAS also have limited capability to collaborate with other manned or unmanned aircraft without human intervention. These limitations make it difficult to deploy large numbers of UAS to support a tactical mission without a large and expensive commitment of human resources to monitor and control each aircraft.”);

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bareddy/Baroudi as applied above, and further in view of Graca (US 20140148949 A1).

Regarding claim 9
Bareddy/Baroudi teaches The information processing device according to claim 1. Bareddy/Baroudi does not teach wherein the plurality of robots include a robot existing in the real world and a robot existing in a virtual world. Graca, in analogous art, teaches wherein the plurality of robots include a robot existing in the real world and a robot existing in a virtual world. (Graca [0009] reads “The system includes a robot simulation device having a processor disposed therein and configured for creating a simulation work cell of an operation of a real robot work cell, the robot simulation device configured to communicate with a real robot control system; and a software program executed by at least one of the robot simulation device and the real robot control system for calculating a part tracking offset between the simulation work cell and the real robot work cell.”);

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Bareddy/Baroudi with those of Graca to include a digital twin of the robotic system.
This would provide the system with a method for improved development and testing. (Graca [0003] reads “Currently, graphical offline programming solutions simplify robotic path teach and paint process development. The solutions are specifically designed to create robotic paths that can be utilized by robot controller application software. These solutions include calibration features in which offset data is calculated and a method is provided to the user to manually shift or offset the taught paths.”);

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bareddy/Baroudi/Valocky as applied above, and further in view of Beom-Su (US 20130123980 A1).

Regarding claim 12
Bareddy/Baroudi teaches The information processing device according to claim 1. Bareddy/Baroudi does not teach wherein the control method includes controlling the operation target so that an image for receiving an instruction for switching from the first robot to the second robot is displayed on a display unit of the terminal device. Beom-Su, in analogous art, teaches wherein the control method includes controlling the operation target so that an image for receiving an instruction for switching from the first robot to the second robot is displayed on a display unit of the terminal device. (Beom-Su [0039] reads “The cooperation unit 206 may also request the cooperation operators to permit the control of the other robots, i.e., request the cooperation operator to transfer the control authority for the other robots, according to the manipulation of the operator U1. A control request message generated for this purpose is transferred to the information transmitter/receiver 208.
After that, if a control permission verification message is received through the information transmitter/receiver 208, the cooperation unit 206 may notify the operator U1 that the control permission verification message is received, using the video display 216.”);

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Bareddy/Baroudi with those of Beom-Su to include a method of switching control of many small robots in an environment. This would allow the system of robots to better act together. (Beom-Su [0008] reads “In addition, in case of changing some missions or adding new missions during performing missions, there is no systematic method to substitute or cancel existing missions by considering the continuity of the existing missions or the overlapping of mission areas. Therefore, when considering a situation of using the small robot in an actual battlefield, a new scheme, which is capable of improving mission capability of the robot through information sharing between individual robots and the cooperation between operators operating the individual robots, needs to be introduced, but there is no suggestion or proposal on the new scheme.”);

Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bareddy/Baroudi as applied above, and further in view of Suzuki (JP 2002101333 A).

Regarding claim 19
Bareddy/Baroudi teaches The information processing device of claim 18. Bareddy/Baroudi does not teach wherein the location thematically associated with the type of the identified one or more objects includes a location where a user can experience specific content. Suzuki, in analogous art, teaches wherein the location thematically associated with the type of the identified one or more objects includes a location where a user can experience specific content.
(Suzuki [0008] reads “Therefore, according to the first aspect of the present invention, even in a special place such as the universe or the deep sea, a driving device such as a robot installed in the place can be freely controlled by remote control of a user. Instead of letting some users monopolize the remote control of the drive, A service business that allows anyone to easily perform remote operation within the contract time simply by making a reservation for a time contract operation by a large number of users who desire the operation, and also plans and provides such remote operation. Even if the cost is enormous, it can be easily realized if the number of reservations is commensurate with it, and the burden on the user is reduced according to the number of reservations. A remote control device capable of enjoying operation can be provided.”);

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the teachings of Bareddy/Baroudi with those of Suzuki to include a system that would better allow the user to experience different and diverse environments. This would then give the user a better experience. (Suzuki [0004] reads “A first object of the present invention is to control a driving device such as a robot installed in a special place such as the universe or the deep sea by remote control by a user even in a special place. At the same time, rather than letting some users monopolize the remote operation of the drive unit, many users who want to operate it simply reserve an operation for a time contract and anyone can easily operate within the contract time. Can be remotely controlled, and For a service provider that plans and provides such remote control, even if the cost is enormous, it can be easily realized if there is a number of reservations commensurate with it, and the burden on the user is also reduced.
An object of the present invention is to provide a remote control device which can be reduced according to the number and can enjoy remote operation at a low cost.”);

Other References Not Cited
Throughout examination, other references were found that could read on the contents of the current disclosure. Although these references were not relied upon in this examination, they may be used in future examination. These references are: Johnson (Optimized tote recommendation process in warehouse order fulfillment operations, US 20200239233 A1); Wouhaybi (Autonomous machine collaboration, US 20210107151 A1); Mitomo (Mobile body, information processor, mobile body system, information processing method, and information processing program, US 10761533 B2).

Response to Arguments
Applicant argues: “While Bareddy discloses selection based on properties like battery charge or tool requirement, and Baroudi discloses selection factors like ‘traveled distance,’ the amended claims require selection ‘based on information specified by image recognition processing’ that ‘represents one or more objects identified in the images captured by the first robot.’ This requirement presents a distinction not found in the cited references, which appear to utilize logistical or state-based metrics for robot selection.” [page 9, first paragraph].

The examiner respectfully disagrees. In the current rejection, Valocky is no longer relied upon to teach this limitation. Baroudi teaches that a variety of robotic cameras may be placed in an environment and, when those cameras detect an event or a collection of objects, such as a group of people, they can send a signal that calls upon a different set of robotic devices. (Baroudi [0172 – 0174] reads “Method 1700 includes detecting 1702 a trigger event associated with a task to be performed.
The trigger event may be detected by sensors placed on mobile objects, such as farm equipment, or other equipment placed within a predetermined environment, and configured to detect certain biological or hazardous situations. … Additionally, the sensors may be cameras that detect accidents, or certain situations where humans need assistance, such as overcrowding and the like. For example, a camera may detect an overcrowded area, within a stadium, an event, concert, pilgrimage or the like, and as a result, a few people may get hurt or injured. The camera can detect that people are on the floor and may need assistance or rescue. Accordingly, a trigger event associated with a task to perform a rescue operation, for example, is detected and transmitted.” And “The robotic network may be a network preconfigured to perform certain tasks (e.g., rescue operations) or the broadcast signal could be randomly transmitted to a plurality of robotic networks. The robotic network includes one or more robots, and the broadcast signal includes information associated with the task to be performed, the information including the trigger event (e.g., accident), the type of task to be performed (e.g., rescue)”);

Therefore, the combination teaches the claimed invention.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN MARTIN O'MALLEY, whose telephone number is (571) 272-6228. The examiner can normally be reached Mon - Fri, 9 am - 5 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at (571) 270-5744.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN MARTIN O'MALLEY/
Examiner, Art Unit 3658

/Ramon A. Mercado/
Supervisory Patent Examiner, Art Unit 3658

Prosecution Timeline

Apr 20, 2023
Application Filed
Jun 10, 2025
Non-Final Rejection — §103
Sep 15, 2025
Response Filed
Sep 30, 2025
Final Rejection — §103
Dec 22, 2025
Applicant Interview (Telephonic)
Dec 31, 2025
Examiner Interview Summary
Jan 02, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Mar 03, 2026
Non-Final Rejection — §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
0%
With Interview (-33.3%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
