Prosecution Insights
Last updated: April 19, 2026
Application No. 18/074,649

CONTROL SYSTEM, CONTROL METHOD, AND COMPUTER READABLE MEDIUM

Status: Final Rejection (§103)
Filed: Dec 05, 2022
Examiner: SHARMA, SHIVAM
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 4 (Final)
Grant Probability: 44% (Moderate)
OA Rounds: 5-6
To Grant: 3y 1m
Grant Probability with Interview: 43%

Examiner Intelligence

Career Allow Rate: 44% (grants 15 of 34 resolved cases; -7.9% vs TC avg)
Interview Lift: -1.3% (minimal; resolved cases with vs. without interview)
Avg Prosecution: 3y 1m (typical timeline)
Career History: 83 total applications across all art units; 49 currently pending
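The headline figures above are internally consistent. A quick arithmetic check, using only the numbers shown on this page (no external data):

```python
# Sanity-check of the examiner career statistics quoted above:
# 15 granted out of 34 resolved cases -> the 44% career allow rate.
granted, resolved = 15, 34
career_allow_rate = granted / resolved * 100
print(f"{career_allow_rate:.1f}%")  # 44.1%, displayed as 44% on the dashboard
```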

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 24.0% (-16.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 34 resolved cases
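Notably, every per-statute delta backs out to the same Tech Center baseline, which presumably is the single "black line" average the footnote refers to. A short sketch deriving it from the table's numbers (TC avg = rate - delta):

```python
# Reconstruct the implied Tech Center average for each statute from the
# table above: TC avg = examiner's rate minus the "vs TC avg" delta.
stats = {
    "101": (11.8, -28.2),
    "103": (44.8, +4.8),
    "102": (19.4, -20.6),
    "112": (24.0, -16.0),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: TC avg ≈ {tc_avg:.1f}%")
# All four statutes back out to the same ~40.0% baseline.
```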

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the amendments filed on 07/31/2025 for Application No. 18/074,649. Claims 1, 3, 6, 7, 9, 12, 13, 15, 18 and 19–24 are currently pending and have been examined. Claims 1, 7 and 13 have been amended. This action is made FINAL.

Information Disclosure Statement

The information disclosure statements filed 07/18/2025 have been received and considered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 6, 7, 9, 12, 13, 15, 18 and 19–24 are rejected under 35 U.S.C. 103 as being unpatentable over Takai et al. (US 20210157318 A1) in view of DAISUKE et al. (JP 2018-195087 A), further in view of Wang et al. (US 11209796 B2). 
Regarding claim 1, Takai teaches, a control system comprising: a processor configured to: (Takai: Paragraph 0058: “The server 500 includes, as its main components, an arithmetic processing unit 510, a communication unit 520, and a storage unit 530. The arithmetic processing unit 510 is an information processing apparatus including an arithmetic unit such as a CPU.”) perform system control for controlling a system including a plurality of cameras; and (Takai: Paragraph 0047: “A robot camera 180 is also provided on the upper surface of the housing part 150. The robot camera 180 includes an objective lens that is opposed to an area whose images are to be captured, an image sensor for generating image data of images to be captured and the like.”; Paragraph 0067: “Next, a system overview of the camera 600 will be described. The camera 600 includes a first camera 600A and a second camera 600B. In the following description, it is assumed that the camera 600 is a collective term for the first camera 600A and the second camera 600B. The camera 600, which can communicate with the server 500 by wireless communication, transmits the image data generated by capturing images to the server 500. The camera 600 transmits, for example, the image data of 30 frames per second (30 fps) to the server. 
The camera 600 may include another camera in addition to the first camera 600A and the second camera 600B.”) perform a group classification process for recognizing a feature of a person photographed by one or more cameras of the plurality of cameras and classifying the person into a predetermined first group or a predetermined second group based on the feature, wherein (Takai: Paragraph 0007: “The classifying unit classifies the person into a predetermined first group or a predetermined second group based on the features.”; Paragraph 0009 – 0011: “The aforementioned control system may further include a camera configured to capture images of the surrounding environment and generate image data, and the feature detection unit may detect the features of the person from the image data generated by the camera. Accordingly, the control system is able to detect the person who is present in the vicinity of the mobile robot from the image data. In the aforementioned control system, the camera may be provided in a position that is separated from the mobile robot so as to capture images of the surrounding environment. Accordingly, the control system is able to objectively capture images of the area in the vicinity of the mobile robot, whereby it is possible to reduce blind spots. In the aforementioned control system, the classifying unit may classify the person in accordance with features of clothing of the person. 
Accordingly, the control system is able to easily classify persons.”) the system control includes control of a traveling mobile robot configured to autonomously move in a predetermined area inside a facility, (Takai: Abstract: “A control system controls an operation mode of a mobile robot that autonomously moves in a predetermined area”) a first plurality of cameras of the plurality of cameras are installed in the facility at a position away from a surface on which the mobile robot travels so as to photograph a periphery of the traveling mobile robot, and (Takai: Paragraph 0067: “Next, a system overview of the camera 600 will be described. The camera 600 includes a first camera 600A and a second camera 600B. In the following description, it is assumed that the camera 600 is a collective term for the first camera 600A and the second camera 600B. The camera 600, which can communicate with the server 500 by wireless communication, transmits the image data generated by capturing images to the server 500. The camera 600 transmits, for example, the image data of 30 frames per second (30 fps) to the server. The camera 600 may include another camera in addition to the first camera 600A and the second camera 600B.”) a second plurality of cameras of the plurality of cameras are installed on the mobile robot, and (Takai: Paragraph 0043: “A front-back distance sensor 152 is provided in each of the upper part of the accommodation room door 151 of the housing part 150 and on the surface that is opposite to the surface where the accommodation room door 151 is provided (i.e., each of the surfaces of the mobile robot 100 in the front-back direction). The front-back distance sensors 152 detect an object in the vicinity of the mobile robot 100, thereby being able to detect the distance between the mobile robot 100 and this object. 
The front-back distance sensors 152 measure, for example, the distance between the mobile robot 100 and the object included in the image data of images captured using a stereo camera and an infrared scanner.”; Paragraph 0047: “A robot camera 180 is also provided on the upper surface of the housing part 150. The robot camera 180 includes an objective lens that is opposed to an area whose images are to be captured, an image sensor for generating image data of images to be captured and the like.”) in the system control, the processor initially performs the group classification process for recognizing a feature of the person photographed by one or more cameras of the first plurality of cameras or one or more cameras of the second plurality of cameras and classifies the person into a predetermined first group or a predetermined second group based on the feature, (Takai: Paragraph 0009: “The aforementioned control system may further include a camera configured to capture images of the surrounding environment and generate image data, and the feature detection unit may detect the features of the person from the image data generated by the camera. Accordingly, the control system is able to detect the person who is present in the vicinity of the mobile robot from the image data.”; Paragraph 0062: “The system controller 513 receives the information regarding the results of the classification from the classifying unit 512 and controls the operation of the mobile robot 100 from the received information. The system controller 513 selects a first operation mode when, for example, a person who belongs to the first group is present in the vicinity of the mobile robot 100. Further, the system controller 513 selects a second operation mode different from the first operation mode when, for example, a person who belongs to the first group is not present in the vicinity of the mobile robot 100. 
Then the system controller 513 controls the mobile robot 100 by the first operation mode or the second operation mode that has been selected as above. Note that specific examples of the first operation mode and the second operation mode will be described later.”) the processor selects an initial operation mode being a first operation mode in a case where there is a person belonging to the first group, or a second operation mode different from the first operation mode in a case where there is no person belonging to the first group, (Takai: Paragraph 0017: “A control method according to one aspect of the present disclosure is a control method for controlling an operation mode of a mobile robot that autonomously moves in a predetermined area, the control method including: a feature detection step for detecting features of a person who is present in the vicinity of the mobile robot; a classification step for classifying the person into a predetermined first group or a predetermined second group based on the features; and a control step for selecting a first operation mode when the person who belongs to the first group is present in the vicinity of the mobile robot and selecting a second operation mode that is different from the first operation mode when the person who belongs to the first group is not present in the vicinity of the mobile robot, thereby controlling the mobile robot.”) after selecting the initial operation mode: in a case where the processor selects the first operation mode, the processor performs subsequent group classification processes by the first plurality of cameras and the second plurality of cameras, and (Takai: Paragraph 0034: “The control system 10 includes, as its main components, a mobile robot 100, a server 500, and a camera 600.”; Paragraph 0059 – 0060: “The feature detection unit 511 receives image data from the camera 600 and processes the received image data, thereby detecting the features of persons who are present in the vicinity of 
the mobile robot. The feature detection unit 511 supplies information regarding features of the detected persons to the classifying unit 512. The classifying unit 512 receives information regarding the features of the persons detected by the feature detection unit 511 and classifies the persons into a plurality of predetermined groups.”; Paragraph 0096: “Further, the control system 10 may detect the person who is present in the vicinity of the mobile robot 100 by acquiring image data from the camera 600 and a camera included in the mobile robot 100. Further, the control system 10 may detect the person who is present in the vicinity of the mobile robot 100 using the camera included in the mobile robot 100 in place of the camera 600.”) In sum, Takai teaches a control system comprising: a processor configured to: perform system control for controlling a system including a plurality of cameras; and perform a group classification process for recognizing a feature of a person photographed by one or more cameras of the plurality of cameras and classifying the person into a predetermined first group or a predetermined second group based on the feature, wherein the system control includes control of a traveling mobile robot configured to autonomously move in a predetermined area inside a facility, a first plurality of cameras of the plurality of cameras are installed in the facility at a position away from a surface on which the mobile robot travels so as to photograph a periphery of the traveling mobile robot, and a second plurality of cameras of the plurality of cameras are installed on the mobile robot, and in the system control, the processor initially performs the group classification process for recognizing a feature of the person photographed by one or more cameras of the first plurality of cameras or one or more cameras of the second plurality of cameras and classifies the person into a predetermined first group or a predetermined second group based on the feature, 
the processor selects an initial operation mode being a first operation mode in a case where there is a person belonging to the first group, or a second operation mode different from the first operation mode in a case where there is no person belonging to the first group, after selecting the initial operation mode: in a case where the processor selects the first operation mode, the processor performs subsequent group classification processes by the first plurality of cameras and the second plurality of cameras. Takai however does not teach in a case where the processor selects the second operation mode, operations of the first plurality of cameras except that of a first camera of the first plurality of cameras are stopped, and the processor performs subsequent group classification processes so as to make the number of cameras used as an information source among the plurality of cameras less than the number of cameras that are used in a case where the first operation mode is selected whereas Daisuke does. Daisuke teaches in a case where the processor selects the second operation mode, operations of the first plurality of cameras except that of a first camera of the first plurality of cameras are stopped, and the processor performs subsequent group classification processes so as to make the number of cameras used as an information source for the group classification processes among the plurality of cameras less than the number of cameras that are used in a case where the first operation mode is selected, and (Daisuke: Paragraph 0035 – 0036: “When a face image of a person is input from any one of the cameras 21, 22, the controller 30 performs an authentication process of authenticating whether or not the person is a resident based on the input face image of the person . Specifically, the controller 30 determines (authenticates) whether or not the inputted face image matches any of the resident registration face images stored in the face image storage unit 31. 
The surveillance cameras 23 are connected to the controller 30. Based on the result of the authentication process, the controller 30 turns on each monitor camera 23 in an ON state or an OFF state. In the ON state of each monitoring camera 23, monitoring processing by each monitoring camera 23 is performed. In this case, the monitoring video is sequentially inputted from the monitoring cameras 23 to the controller 30.”: Paragraph 0052: “In step S 12, based on the acquired face image of the person, it is determined whether or not a person entering the private road 18 from any one of the doorways 15 (16) is a resident of the house 13. This determination is made based on whether or not the face image of the acquired person coincides with any of the resident registration face images stored in the face image storage section 31. If the person entering the private road 18 from the entrance 15 (16) is a person other than a resident, such as a suspicious person, the process proceeds to step S18, and a non-resident flag is set. On the other hand, if the person entering the private route 18 from the gateway 15 (16) is a resident, the process proceeds to step S13.”; Paragraph 0060: “After setting the resident flag or after setting the non-resident flag in step S18, the process proceeds to step S17 and the monitoring process is executed. Hereinafter, this monitoring process will be described with reference to FIG. 5. FIG. 5 is a flowchart showing the monitoring process. After the monitoring process is executed, this process (monitoring control process) is terminated.”; Paragraphs 0062 – 0064: “In step S32, it is determined whether or not the resident flag is set. That is, here, it is determined whether or not the person entering the private road 18 is a resident. If the person entering the private road 18 is not a resident, that is, if the person entering the private road 18 is a non-resident, the process proceeds to step S37. 
In step S37, an ON signal is output to each monitoring camera 23, and these monitoring cameras 23 are turned ON. As a result, monitoring processing by each monitoring camera 23 is performed. In the following step S 38, the monitor image input from the monitoring camera 23 is transmitted from the communication unit 32 to the management computer of the monitoring center 38. This makes it possible for the monitoring center 38 to monitor non-residents such as suspicious persons who stay at the private road 18.”; Paragraph 0077: “Even when the resident is in the moving area, if the staying time in the area exceeds the predetermined time, there is a possibility that the resident is taking suspicious behavior. Therefore, in the above embodiment, in such a case, the monitoring processing by the monitoring camera 23 is executed. As a result, it is possible to further improve the security performance”; Paragraph 0075: “When it is determined that a resident (equivalent to a permitter) who resides in the house 13 is about to enter the private road 18 from the entrances 15, 16 or from the side of the house 13, the monitoring processing by the monitoring camera 23 is controlled not to be executed. In this case, unnecessary monitoring of residents who do not need to monitor them can be avoided, so that it is possible to improve energy saving performance when monitoring the private road 18”. , Supplemental Note: the first operation mode is interpreted to when an non-resident person is in the passage way, then all monitoring cameras are turned on. A second operation mode is interpreted as when a resident is about to enter the facility and less cameras are used for monitoring) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Takai with the teachings of Daisuke with a reasonable expectation of success. 
Both Takai and Daisuke teach surveillance systems able to detect people in their respective facilities. Daisuke differs in the ability to stop monitoring from all cameras in a certain operation mode, so as to conserve energy of the system. One with knowledge in the art would find this function to be the use of a known technique to improve a known device ready for improvement to yield predictable results when used in combination with the surveillance system of Takai. For example, Daisuke stops the monitoring of cameras when an unidentified person is no longer present; combining this with Takai allows the surveillance system not to operate the cameras away from the entrance when no one is present. This conserves energy that would otherwise be used by the other cameras when they are not needed. Takai in view of Daisuke, however, still does not teach that, in the second operation mode, the processor selects which camera serves as the first camera according to a traveling position of the mobile robot, whereas Wang does. Wang teaches that, in the second operation mode, the processor selects which camera serves as the first camera and whether or not the second plurality of cameras are used as the information source for group classification processes according to a traveling position of the mobile robot. (Wang: Col. 3, line 66 – Col. 4, line 1: “Alternatively, the one or more cameras 1002 may be embedded in the one or more robotic surveillance devices 1008, e.g., a drone or a motorized device.”; Col. 11, lines 37 – 44: “FIG. 6 illustrates a flow chart of an exemplary working process 6000 of the surveillance system in FIG. 1 according to some embodiments of the present disclosure. In the illustrated embodiments, one or more of the robotic surveillance devices 1008 may include a manned platform, and the one or more robotic surveillance device 1008 may go to the desired location by itself or carrying a security officer.”; Col. 11, line 44 – Col. 
12, line 8: “At block 6002, the analyzing unit 1004 of the surveillance system 1000 may obtain video data. For example, video stream captured by the cameras may be imported into the analyzing unit 1004. At block 6004, the analyzing unit 1004 may analyze the video data. For example, the analyzing unit 1004 may analyze video clips using suitable identity recognition algorithm and activity recognition algorithm. At block 6006, the analyzing unit 1004 may determine whether there is a trigger event based on the video data. For example, the analyze unit 1004 may use face recognition algorithm to determine the identity of a person occurring in the video and determine if the person is a suspect based on the person's identity. If so, there is a trigger event. In other examples, the analyzing unit 1004 may use activity recognition algorithm to detect a person's behavior to determine if the behavior is suspicious. If so, there is a trigger event. In yet other examples, the analyzing unit 1004 may combine the identity determination and activity determination to determine if there is a trigger event. If the analyzing unit 1004 determines that there is no trigger event, the working process 6000 returns to block 6004, and the analyzing unit 1004 may continue to analyze more video data. If the analyzing unit 1004 determines that there is a trigger event, the working process 6000 goes to block 6008 and at block 6008 the decision device 1006 of the surveillance system 1000 may determine an optimal robotic surveillance device 1008. An optimal robotic surveillance device 1008 may be the robotic surveillance device 1008 closest to a desired location indicted by the trigger event. The decision device 1006 of the surveillance system 1000 may connect to the optimal robotic surveillance device 1008 via a communication channel, e.g., Wi-Fi.”; Col. 
12, lines 23 – 30: “At block 6014, the decision unit 1006 of the surveillance system 1000 may instruct the optimal robotic surveillance device 1008 to go to the desired location and perform responding actions autonomously. For example, upon receiving the instruction, the optimal robotic surveillance device 1008 may plan its own path and determine to conduct the responding actions such as video or picture recording or other interference strategies autonomously.”, Supplemental Note: as shown in Fig. 6, the system utilizes a robot as a first camera to travel to a target location)

[Figure: grayscale flowchart (FIG. 6 of Wang) reproduced in the Office Action; image omitted.]

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Takai, as modified, with the teachings of Wang with a reasonable expectation of success. Both Takai and Wang teach a surveillance system including cameras and an autonomous robot to monitor a facility. Wang further teaches the ability of an operation mode that identifies trigger conditions, and when a trigger condition is identified, to send the optimal robotic surveillance device to that location. The robot device of Wang is able to autonomously travel and consists of a camera, like the autonomous robot as taught by Takai. One with knowledge in the art would find combining this function of Wang with the system of Takai to be obvious to try. For example, if the monitoring cameras of Takai placed throughout the floor capture a trigger condition, the autonomous robot can be sent to that location for further assistance. This improves the surveillance system of Takai as the autonomous robot can aid in situations by being another angle of surveillance for the control system to better direct its next action accordingly. There also can be blind spots not fully viewable by the monitoring cameras and the autonomous robot with its camera can be used for a better assessment. 
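For readers tracing the three-reference combination against claim 1, the control flow the rejection describes can be sketched in code. This is purely an editorial reading aid, not part of the record: every identifier below is hypothetical, and the sketch reflects the claim language as characterized in this action, not any reference's actual implementation.

```python
# Illustrative sketch only — all names are hypothetical.
# Takai is cited for group classification and mode selection,
# Daisuke for stopping all but one facility camera in the second mode,
# and Wang for choosing the remaining camera by the robot's position.

def classify(person):
    # Takai: classify by a detected feature (e.g., clothing) into two groups.
    return "first_group" if person.get("uniform") else "second_group"

def select_mode(people):
    # Takai: first mode if any first-group person is nearby, else second mode.
    return "first" if any(classify(p) == "first_group" for p in people) else "second"

def classification_sources(mode, facility_cams, robot_cams, robot_position):
    if mode == "first":
        # First mode: all facility and robot cameras feed classification.
        return facility_cams + robot_cams
    # Second mode (Daisuke): stop every facility camera except one;
    # (Wang): which camera stays on depends on the robot's traveling position.
    nearest = min(facility_cams, key=lambda cam: abs(cam["pos"] - robot_position))
    return [nearest]

facility = [{"id": "gate_cam", "pos": 0}, {"id": "hall_cam", "pos": 10}]
robot = [{"id": "robot_cam"}]
people = [{"uniform": False}]  # no first-group person present

mode = select_mode(people)
cams = classification_sources(mode, facility, robot, robot_position=2)
print(mode, [c["id"] for c in cams])  # second ['gate_cam']
```

The point of contention in the rejection is the second branch: fewer information sources in the second mode (Daisuke) and position-dependent choice of the one remaining camera (Wang).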
Regarding claim 3, Takai, as modified, does not teach wherein the first camera includes a camera provided at a position for monitoring a security gate in the facility whereas Daisuke does. Daisuke teaches wherein the first camera includes a camera provided at a position for monitoring a security gate in the facility (Daisuke: Paragraph 0008: “According to the present invention, when an authorized person is about to enter a passage area or entered, monitor by the monitoring unit is not executed. In this case, it is possible to avoid unnecessarily monitoring the authorized person who does not need to monitor it, so that it is possible to improve the energy saving performance, etc. in monitoring the passage area”: Paragraph 0028: “In the site 12, a monitoring camera 23 is provided at a plurality of locations. The monitoring camera 23 is a monitoring means for monitoring a person entering the driveway 18, and monitors a person with a predetermined range in the driveway 18 as a monitoring target. In this monitoring system, these monitoring cameras 23 monitor people throughout the driveway 18 as monitoring targets. In addition, the monitoring camera 23 is capable of switching ON / OFF of the power supply, and the monitoring processing by the monitoring camera 23 is executed in the power ON state, and the monitoring processing is not executed in the power OFF state. Further, each monitoring camera 23 is provided by using, for example, a wall 11 or a wall of a house 13.”, Supplemental Note: the monitoring cameras are used to point at the passage area or the private road, which is equivalent to the claimed function) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Takai with the teachings of Daisuke with a reasonable expectation of success. Both Takai and Daisuke teach surveillance systems able to detect people in their respective facilities. 
Takai teaches the security system to be implemented on a floor-level basis of a building, citing that the cameras placed around the floor are in passageways and in front of the elevators. Daisuke teaches the facility to be at the entrance of a building, such as a driveway, where a monitoring camera is placed for surveillance. Thus, one with knowledge in the art would find both of these placements to be a simple substitution. For example, a person coming to a certain floor utilizes elevators or stairs, both of which are captured by the camera placement of Takai and are interpreted as the entrance or a security gate between the outside and the target floor. This is the same as the entrance taught by Daisuke, which separates the outside from the inside of the facility. In both prior art references, a camera is taught to be placed in these locations.

Regarding claim 6, Takai teaches wherein the feature of the person photographed includes a feature of clothes the person photographed is wearing or a predetermined article the person photographed is carrying or not carrying, and in the group classification processes, the person is classified on a basis of whether the feature of the clothes the person photographed is wearing is that of clothing that a person belonging to the second group wears, or whether or not the person carries the predetermined article that the person belonging to the second group carries. (Takai: Paragraph 0007: “The feature detection unit detects features of a person who is present in the vicinity of the mobile robot. The classifying unit classifies the person into a predetermined first group or a predetermined second group based on the features. 
The controller selects a first operation mode when the person who belongs to the first group is present in the vicinity of the mobile robot and selects a second operation mode that is different from the first operation mode when the person who belongs to the first group is not present in the vicinity of the mobile robot, thereby controlling the mobile robot.”; Paragraphs 0011 – 0012: “In the aforementioned control system, the classifying unit may classify the person in accordance with features of clothing of the person. Accordingly, the control system is able to easily classify persons. In the aforementioned control system, the feature detection unit may detect a color tone in a predetermined part of the clothing of the person, and the classifying unit may classify the person in accordance with the color tone. Accordingly, the control system is able to easily classify persons.”) Regarding claim 7, Takai teaches a control method comprising: performing system control for controlling a system including a plurality of cameras; and (Takai: Paragraph 0047: “A robot camera 180 is also provided on the upper surface of the housing part 150. The robot camera 180 includes an objective lens that is opposed to an area whose images are to be captured, an image sensor for generating image data of images to be captured and the like.”; Paragraph 0067: “Next, a system overview of the camera 600 will be described. The camera 600 includes a first camera 600A and a second camera 600B. In the following description, it is assumed that the camera 600 is a collective term for the first camera 600A and the second camera 600B. The camera 600, which can communicate with the server 500 by wireless communication, transmits the image data generated by capturing images to the server 500. The camera 600 transmits, for example, the image data of 30 frames per second (30 fps) to the server. 
The camera 600 may include another camera in addition to the first camera 600A and the second camera 600B.”) performing a group classification process for recognizing a feature of a person photographed by one or more cameras of the plurality of cameras and classifying the person into a predetermined first group or a predetermined second group based on the feature, wherein (Takai: Paragraph 0007: “The classifying unit classifies the person into a predetermined first group or a predetermined second group based on the features.”; Paragraph 0009 – 0011: “The aforementioned control system may further include a camera configured to capture images of the surrounding environment and generate image data, and the feature detection unit may detect the features of the person from the image data generated by the camera. Accordingly, the control system is able to detect the person who is present in the vicinity of the mobile robot from the image data. In the aforementioned control system, the camera may be provided in a position that is separated from the mobile robot so as to capture images of the surrounding environment. Accordingly, the control system is able to objectively capture images of the area in the vicinity of the mobile robot, whereby it is possible to reduce blind spots. In the aforementioned control system, the classifying unit may classify the person in accordance with features of clothing of the person. 
Accordingly, the control system is able to easily classify persons.”) the system control includes control of a traveling mobile robot configured to autonomously move in a predetermined area inside a facility, (Takai: Abstract: “A control system controls an operation mode of a mobile robot that autonomously moves in a predetermined area”) a first plurality of cameras of the plurality of cameras are installed in the facility at a position away from a surface on which the mobile robot travels so as to photograph a periphery of the traveling mobile robot, and (Takai: Paragraph 0067: “Next, a system overview of the camera 600 will be described. The camera 600 includes a first camera 600A and a second camera 600B. In the following description, it is assumed that the camera 600 is a collective term for the first camera 600A and the second camera 600B. The camera 600, which can communicate with the server 500 by wireless communication, transmits the image data generated by capturing images to the server 500. The camera 600 transmits, for example, the image data of 30 frames per second (30 fps) to the server. The camera 600 may include another camera in addition to the first camera 600A and the second camera 600B.”) a second plurality of cameras of the plurality of cameras are installed on the mobile robot, and (Takai: Paragraph 0043: “A front-back distance sensor 152 is provided in each of the upper part of the accommodation room door 151 of the housing part 150 and on the surface that is opposite to the surface where the accommodation room door 151 is provided (i.e., each of the surfaces of the mobile robot 100 in the front-back direction). The front-back distance sensors 152 detect an object in the vicinity of the mobile robot 100, thereby being able to detect the distance between the mobile robot 100 and this object.
The front-back distance sensors 152 measure, for example, the distance between the mobile robot 100 and the object included in the image data of images captured using a stereo camera and an infrared scanner.”; Paragraph 0047: “A robot camera 180 is also provided on the upper surface of the housing part 150. The robot camera 180 includes an objective lens that is opposed to an area whose images are to be captured, an image sensor for generating image data of images to be captured and the like.”) in the system control, the group classification process is initially performed for recognizing a feature of the person photographed by one or more cameras of the first plurality of cameras or one or more cameras of the second plurality of cameras and classifies the person into a predetermined first group or a predetermined second group based on the feature, (Takai: Paragraph 0009: “The aforementioned control system may further include a camera configured to capture images of the surrounding environment and generate image data, and the feature detection unit may detect the features of the person from the image data generated by the camera. Accordingly, the control system is able to detect the person who is present in the vicinity of the mobile robot from the image data.”; Paragraph 0062: “The system controller 513 receives the information regarding the results of the classification from the classifying unit 512 and controls the operation of the mobile robot 100 from the received information. The system controller 513 selects a first operation mode when, for example, a person who belongs to the first group is present in the vicinity of the mobile robot 100. Further, the system controller 513 selects a second operation mode different from the first operation mode when, for example, a person who belongs to the first group is not present in the vicinity of the mobile robot 100. 
Then the system controller 513 controls the mobile robot 100 by the first operation mode or the second operation mode that has been selected as above. Note that specific examples of the first operation mode and the second operation mode will be described later.”) a first operation mode is selected as an initial operation mode in a case where there is a person belonging to the first group or a second operation mode different from the first operation mode is selected in a case where there is no person belonging to the first group, (Takai: Paragraph 0017: “A control method according to one aspect of the present disclosure is a control method for controlling an operation mode of a mobile robot that autonomously moves in a predetermined area, the control method including: a feature detection step for detecting features of a person who is present in the vicinity of the mobile robot; a classification step for classifying the person into a predetermined first group or a predetermined second group based on the features; and a control step for selecting a first operation mode when the person who belongs to the first group is present in the vicinity of the mobile robot and selecting a second operation mode that is different from the first operation mode when the person who belongs to the first group is not present in the vicinity of the mobile robot, thereby controlling the mobile robot.”) after selecting the initial operation mode: in a case where the first operation mode is selected, performing subsequent group classification processes by the first plurality of cameras and the second plurality of cameras, and (Takai: Paragraph 0034: “The control system 10 includes, as its main components, a mobile robot 100, a server 500, and a camera 600.”; Paragraph 0059 – 0060: “The feature detection unit 511 receives image data from the camera 600 and processes the received image data, thereby detecting the features of persons who are present in the vicinity of the mobile robot. 
The feature detection unit 511 supplies information regarding features of the detected persons to the classifying unit 512. The classifying unit 512 receives information regarding the features of the persons detected by the feature detection unit 511 and classifies the persons into a plurality of predetermined groups.”; Paragraph 0096: “Further, the control system 10 may detect the person who is present in the vicinity of the mobile robot 100 by acquiring image data from the camera 600 and a camera included in the mobile robot 100. Further, the control system 10 may detect the person who is present in the vicinity of the mobile robot 100 using the camera included in the mobile robot 100 in place of the camera 600.”) In sum, Takai teaches a control method comprising: performing system control for controlling a system including a plurality of cameras; and performing a group classification process for recognizing a feature of a person photographed by one or more cameras of the plurality of cameras and classifying the person into a predetermined first group or a predetermined second group based on the feature, wherein the system control includes control of a traveling mobile robot configured to autonomously move in a predetermined area inside a facility, a first plurality of cameras of the plurality of cameras are installed in the facility at a position away from a surface on which the mobile robot travels so as to photograph a periphery of the traveling mobile robot, and a second plurality of cameras of the plurality of cameras are installed on the mobile robot, and in the system control, the group classification process is initially performed for recognizing a feature of the person photographed by one or more cameras of the first plurality of cameras or one or more cameras of the second plurality of cameras and classifies the person into a predetermined first group or a predetermined second group based on the feature, a first operation mode is selected as an initial 
operation mode in a case where there is a person belonging to the first group or a second operation mode different from the first operation mode is selected in a case where there is no person belonging to the first group, after selecting the initial operation mode: in a case where the first operation mode is selected, performing subsequent group classification processes by the first plurality of cameras and the second plurality of cameras. Takai however does not teach in a case where the second operation mode is selected, operations of the first plurality of cameras except that of a first camera of the first plurality of cameras are stopped, and subsequent group classification processes are performed so that the number of cameras used as an information source among the plurality of cameras is made less than the number of cameras that are used in a case where the first operation mode is selected whereas Daisuke does. Daisuke teaches in a case where the second operation mode is selected, operations of the first plurality of cameras except that of a first camera of the first plurality of cameras are stopped, and subsequent group classification processes are performed so that the number of cameras used as an information source for the group classification processes among the plurality of cameras is made less than the number of cameras that are used in a case where the first operation mode is selected, and (Daisuke: Paragraph 0035 – 0036: “When a face image of a person is input from any one of the cameras 21, 22, the controller 30 performs an authentication process of authenticating whether or not the person is a resident based on the input face image of the person . Specifically, the controller 30 determines (authenticates) whether or not the inputted face image matches any of the resident registration face images stored in the face image storage unit 31. The surveillance cameras 23 are connected to the controller 30. 
Based on the result of the authentication process, the controller 30 turns on each monitor camera 23 in an ON state or an OFF state. In the ON state of each monitoring camera 23, monitoring processing by each monitoring camera 23 is performed. In this case, the monitoring video is sequentially inputted from the monitoring cameras 23 to the controller 30.”: Paragraph 0052: “In step S 12, based on the acquired face image of the person, it is determined whether or not a person entering the private road 18 from any one of the doorways 15 (16) is a resident of the house 13. This determination is made based on whether or not the face image of the acquired person coincides with any of the resident registration face images stored in the face image storage section 31. If the person entering the private road 18 from the entrance 15 (16) is a person other than a resident, such as a suspicious person, the process proceeds to step S18, and a non-resident flag is set. On the other hand, if the person entering the private route 18 from the gateway 15 (16) is a resident, the process proceeds to step S13.”; Paragraph 0060: “After setting the resident flag or after setting the non-resident flag in step S18, the process proceeds to step S17 and the monitoring process is executed. Hereinafter, this monitoring process will be described with reference to FIG. 5. FIG. 5 is a flowchart showing the monitoring process. After the monitoring process is executed, this process (monitoring control process) is terminated.”; Paragraphs 0062 – 0064: “In step S32, it is determined whether or not the resident flag is set. That is, here, it is determined whether or not the person entering the private road 18 is a resident. If the person entering the private road 18 is not a resident, that is, if the person entering the private road 18 is a non-resident, the process proceeds to step S37. In step S37, an ON signal is output to each monitoring camera 23, and these monitoring cameras 23 are turned ON. 
As a result, monitoring processing by each monitoring camera 23 is performed. In the following step S 38, the monitor image input from the monitoring camera 23 is transmitted from the communication unit 32 to the management computer of the monitoring center 38. This makes it possible for the monitoring center 38 to monitor non-residents such as suspicious persons who stay at the private road 18.”; Paragraph 0077: “Even when the resident is in the moving area, if the staying time in the area exceeds the predetermined time, there is a possibility that the resident is taking suspicious behavior. Therefore, in the above embodiment, in such a case, the monitoring processing by the monitoring camera 23 is executed. As a result, it is possible to further improve the security performance”; Paragraph 0075: “When it is determined that a resident (equivalent to a permitter) who resides in the house 13 is about to enter the private road 18 from the entrances 15, 16 or from the side of the house 13, the monitoring processing by the monitoring camera 23 is controlled not to be executed. In this case, unnecessary monitoring of residents who do not need to monitor them can be avoided, so that it is possible to improve energy saving performance when monitoring the private road 18”. Supplemental Note: the first operation mode is interpreted as the case in which a non-resident person is in the passageway, whereupon all monitoring cameras are turned on. The second operation mode is interpreted as the case in which a resident is about to enter the facility and fewer cameras are used for monitoring.) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Takai with the teachings of Daisuke with a reasonable expectation of success. Please refer to the rejection of claim 1, as both claims state the same function and are therefore rejected on the same grounds.
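The two-mode camera control the examiner reads onto the Takai/Daisuke combination can be summarized concretely. The sketch below is illustrative only and comes from neither reference; the names `Camera`, `select_mode`, and `configure_cameras` are hypothetical, and the mapping of modes to camera sets follows the examiner's Supplemental Note (first mode: all facility and robot cameras active; second mode: every facility camera except a designated first camera stopped).

```python
from dataclasses import dataclass

FIRST_GROUP = "first_group"    # e.g. non-residents/persons requiring caution
SECOND_GROUP = "second_group"  # e.g. residents/authorized persons

@dataclass
class Camera:
    camera_id: str
    active: bool = True

def select_mode(groups_nearby):
    """Select the initial operation mode from the classified groups
    detected in the vicinity of the mobile robot."""
    return "first" if FIRST_GROUP in groups_nearby else "second"

def configure_cameras(mode, facility_cams, robot_cams, first_cam_id):
    """Return the cameras used as information sources for subsequent
    group classification. In the first mode, all facility and robot
    cameras feed classification; in the second mode, facility cameras
    other than the designated first camera are stopped."""
    if mode == "first":
        sources = facility_cams + robot_cams
        for cam in sources:
            cam.active = True
    else:
        sources = []
        for cam in facility_cams:
            cam.active = (cam.camera_id == first_cam_id)
            if cam.active:
                sources.append(cam)
    return sources
```

Under this reading, the second mode always uses strictly fewer information sources than the first, which is the energy-saving behavior for which Daisuke is cited.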
Takai in view of Daisuke, however, still does not teach that, in the second operation mode, the processor selects which camera serves as the first camera according to a traveling position of the mobile robot, whereas Wang does. Wang teaches that, in the second operation mode, the camera which serves as the first camera is selected, and whether or not the second plurality of cameras are used as the information source for group classification processes is determined, according to a traveling position of the mobile robot. (Wang: Col. 3, line 66 – Col. 4, line 1: “Alternatively, the one or more cameras 1002 may be embedded in the one or more robotic surveillance devices 1008, e.g., a drone or a motorized device.”; Col. 11, lines 37 – 44: “FIG. 6 illustrates a flow chart of an exemplary working process 6000 of the surveillance system in FIG. 1 according to some embodiments of the present disclosure. In the illustrated embodiments, one or more of the robotic surveillance devices 1008 may include a manned platform, and the one or more robotic surveillance device 1008 may go to the desired location by itself or carrying a security officer.”; Col. 11, line 44 – Col. 12, line 8: “At block 6002, the analyzing unit 1004 of the surveillance system 1000 may obtain video data. For example, video stream captured by the cameras may be imported into the analyzing unit 1004. At block 6004, the analyzing unit 1004 may analyze the video data. For example, the analyzing unit 1004 may analyze video clips using suitable identity recognition algorithm and activity recognition algorithm. At block 6006, the analyzing unit 1004 may determine whether there is a trigger event based on the video data. For example, the analyze unit 1004 may use face recognition algorithm to determine the identity of a person occurring in the video and determine if the person is a suspect based on the person's identity. If so, there is a trigger event.
In other examples, the analyzing unit 1004 may use activity recognition algorithm to detect a person's behavior to determine if the behavior is suspicious. If so, there is a trigger event. In yet other examples, the analyzing unit 1004 may combine the identity determination and activity determination to determine if there is a trigger event. If the analyzing unit 1004 determines that there is no trigger event, the working process 6000 returns to block 6004, and the analyzing unit 1004 may continue to analyze more video data. If the analyzing unit 1004 determines that there is a trigger event, the working process 6000 goes to block 6008 and at block 6008 the decision device 1006 of the surveillance system 1000 may determine an optimal robotic surveillance device 1008. An optimal robotic surveillance device 1008 may be the robotic surveillance device 1008 closest to a desired location indicted by the trigger event. The decision device 1006 of the surveillance system 1000 may connect to the optimal robotic surveillance device 1008 via a communication channel, e.g., Wi-Fi.”; Col. 12, lines 23 – 30: “At block 6014, the decision unit 1006 of the surveillance system 1000 may instruct the optimal robotic surveillance device 1008 to go to the desired location and perform responding actions autonomously. For example, upon receiving the instruction, the optimal robotic surveillance device 1008 may plan its own path and determine to conduct the responding actions such as video or picture recording or other interference strategies autonomously.”, Supplemental Note: as shown in Fig. 6, the system utilizes a robot as a first camera to travel to a target location) [Image: Wang, FIG. 6 flow chart] Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Takai, as modified, with the teachings of Wang with a reasonable expectation of success.
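Wang's selection of the “first camera” according to the robot's traveling position amounts to a nearest-device choice. The following sketch is a hypothetical illustration, not code from Wang; `select_first_camera`, `use_robot_cameras`, the Euclidean-distance metric, and the `coverage_radius` threshold are all assumptions layered on Wang's bare requirement of choosing the surveillance device closest to the indicated location.

```python
import math

def select_first_camera(robot_pos, facility_cameras):
    """Return the facility camera closest to the robot's current
    traveling position; this camera remains on in the second mode."""
    return min(facility_cameras,
               key=lambda cam: math.dist(robot_pos, cam["pos"]))

def use_robot_cameras(robot_pos, facility_cameras, coverage_radius=5.0):
    """Decide whether the on-robot cameras are also needed as an
    information source: here, only when no facility camera covers the
    robot's position (coverage_radius is an assumed parameter)."""
    nearest = select_first_camera(robot_pos, facility_cameras)
    return math.dist(robot_pos, nearest["pos"]) > coverage_radius
```

As the robot travels, re-running `select_first_camera` hands the “first camera” role from one fixed camera to the next, which is the position-dependent selection the examiner maps to Wang's optimal-device determination at block 6008.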
Please refer to the rejection of claim 1, as both claims state the same function and are therefore rejected on the same grounds. Regarding claim 9, Takai, as modified, does not teach wherein the first camera includes a camera provided at a position for monitoring a security gate in the facility, whereas Daisuke does. Daisuke teaches wherein the first camera includes a camera provided at a position for monitoring a security gate in the facility. (Daisuke: Paragraph 0008: “According to the present invention, when an authorized person is about to enter a passage area or entered, monitor by the monitoring unit is not executed. In this case, it is possible to avoid unnecessarily monitoring the authorized person who does not need to monitor it, so that it is possible to improve the energy saving performance, etc. in monitoring the passage area”; Paragraph 0028: “In the site 12, a monitoring camera 23 is provided at a plurality of locations. The monitoring camera 23 is a monitoring means for monitoring a person entering the driveway 18, and monitors a person with a predetermined range in the driveway 18 as a monitoring target. In this monitoring system, these monitoring cameras 23 monitor people throughout the driveway 18 as monitoring targets. In addition, the monitoring camera 23 is capable of switching ON /

Prosecution Timeline

Dec 05, 2022: Application Filed
Sep 18, 2024: Non-Final Rejection — §103
Nov 13, 2024: Examiner Interview Summary
Dec 13, 2024: Response Filed
Mar 03, 2025: Final Rejection — §103
Apr 15, 2025: Applicant Interview (Telephonic)
Apr 15, 2025: Examiner Interview Summary
May 02, 2025: Request for Continued Examination
May 06, 2025: Response after Non-Final Action
May 28, 2025: Non-Final Rejection — §103
Jul 31, 2025: Response Filed
Oct 31, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12491869
METHOD FOR CONTROLLING VEHICLE, VEHICLE AND ELECTRONIC DEVICE
2y 5m to grant • Granted Dec 09, 2025
Patent 12485897
METHOD FOR DETERMINING PASSAGE OF AUTONOMOUS VEHICLE AND RELATED DEVICE
2y 5m to grant • Granted Dec 02, 2025
Patent 12434722
METHODS AND SYSTEMS FOR LATERAL CONTROL OF A VEHICLE
2y 5m to grant • Granted Oct 07, 2025
Patent 12427919
VEHICLE BLIND-SPOT REDUCTION DEVICE
2y 5m to grant • Granted Sep 30, 2025
Patent 12406535
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant • Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
44%
Grant Probability
43%
With Interview (-1.3%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
