Prosecution Insights
Last updated: April 19, 2026
Application No. 18/253,214

AUTONOMOUS MOBILE BODY, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Status: Non-Final OA (§103)
Filed: May 17, 2023
Examiner: NIEVES FLORES, NEIT JOSAFAT
Art Unit: 3664
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Sony Group Corporation
OA Round: 3 (Non-Final)

Grant Probability: 43% (Moderate)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 43% (3 granted / 7 resolved; -9.1% vs TC avg)
Interview Lift: +80.0% for resolved cases with interview (strong)
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 21
Total Applications: 28 (career history, across all art units)
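As a quick sanity check on the dashboard arithmetic above, a minimal sketch (assuming the career allow rate is simply granted over resolved cases, and that the "-9.1% vs TC avg" delta is in percentage points; variable names are illustrative):

```python
# Sanity-check of the examiner dashboard arithmetic (illustrative
# assumptions: allow rate = granted / resolved, delta in percentage points).
granted = 3
resolved = 7

allow_rate = granted / resolved * 100   # career allow rate, in percent
implied_tc_avg = allow_rate + 9.1       # dashboard shows -9.1% vs TC avg

print(f"Career allow rate: {allow_rate:.1f}%")   # ~42.9%, displayed as 43%
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```

The displayed 43% is consistent with 3 grants out of 7 resolved cases, rounded to the nearest whole percent.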

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§112: 24.3% (-15.7% vs TC avg)

Tech Center averages are estimates; figures based on career data from 7 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to Applicant’s RCE and the amendments and remarks filed on 09/30/2025. The Applicant has amended claims 1, 18, 19, and 20, and cancelled original claims 12, 17, and 22 without prejudice or disclaimer. No new claims or new matter have been introduced. Claims 1-11, 13-16, and 18-21 are currently pending and are addressed below. Claims 12, 17, and 22 have been cancelled and will not be considered. The Examiner notes that the fundamentals of the rejection are based on the broadest reasonable interpretation of the claim language. Any reference to specific figures, columns, lines, and paragraphs should not be considered limiting in any way; the entire cited reference, as well as any secondary teaching reference(s), is considered to provide relevant disclosure relating to the claimed invention. Applicant is kindly invited to consider each reference as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/25/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The amendment filed on 09/30/2025 has been entered. Claims 1-11, 13-16, and 18-21 remain pending in the application. Applicant’s amendments have overcome each and every 35 U.S.C. 112 rejection previously set forth in the Final Office Action mailed 07/02/2025. 
Reply to Applicant’s Remarks

Applicant’s remarks filed 09/30/2025 have been fully considered and are addressed as follows:

Claim Rejections Under 35 U.S.C. 112: Applicant’s amendments to the claims filed on 09/30/2025 have overcome the 35 U.S.C. 112(a), 112(b), and 112(f) rejections previously set forth in the Non-Final Office Action mailed 07/02/2025.

Claim Rejections Under 35 U.S.C. 103: Applicant’s arguments (see Arguments/Remarks, filed 09/30/2025) with respect to the claim rejections under 35 U.S.C. 103 have been fully considered but, respectfully, are not persuasive. Regarding the Applicant’s arguments that “amended independent claims 1 and 18-20 recite novel features not taught or rendered obvious by the applied references.”, “the cited references fail to teach or suggest "circuitry configured to: recognize a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map, and a pattern of the virtual marker is represented by a symbol or a mark; plan an action of the autonomous mobile body not to enter a predetermined area based on the virtual marker," as recited in Applicant's claim 1.”, “Paragraph [0129] of Mizukami merely describes that a recognition unit 120 may perform marker recognition. Thus, Applicant respectfully submits that Mizukami fails to teach or suggest circuitry configured to recognize a virtual marker at a position on a map, wherein the virtual marker was installed on the map using an information processing terminal, and a pattern of the virtual marker is represented by a symbol or a mark and plan an action of the autonomous mobile body not to enter a predetermined area based on the virtual marker, as recited in Applicant's claim 1.”, and “independent claims 1 and 18-20 (and all claims depending thereon) patentably distinguish over Mizukami. 
Further, Applicant respectfully submits that Ebrahimi, Sakamoto, Hayashi, Higaki, and Yamato fail to cure the above-noted deficiencies of Mizukami.”, the Examiner respectfully disagrees.

Ebrahimi discloses circuitry configured to: recognize a virtual marker at a position on a map (see at least Ebrahimi [¶0282] “The robot may, for example, use the map to autonomously navigate the environment during operation, e.g., accessing the map to determine that a candidate route is blocked by an obstacle denoted in the map, [] the present techniques may also be used for plane finding in augmented reality, barrier detection in virtual reality applications”), wherein an information processing terminal was used to install the virtual marker on the map (see at least Ebrahimi [¶0567], “For example, via such an interface, the user may extend the boundaries of the map [] in areas where the actual boundaries are further than those identified by sensors of the robot, trim boundaries where sensors identified boundaries further than the actual boundaries, or adjusts the location of doorways. Or the user may create virtual boundaries”); plan an action of the autonomous mobile body not to enter a predetermined area based on the virtual marker (see at least Ebrahimi [¶0567], “the user may create virtual boundaries that segment a room for different treatment or across which the robot will not traverse. In some cases where the processor creates an accurate map of the environment, the user may adjust the map boundaries to keep the robot from entering some areas.”). 
MIZUKAMI discloses a pattern of the virtual marker is represented by a symbol or a mark (see at least MIZUKAMI [¶0129], “the recognition unit 120 may perform human identification, recognition of facial expression and line-of-sight, object recognition, color recognition, shape recognition, marker recognition, obstacle recognition, step recognition, brightness recognition, and the like.”, that means two-dimensional and three-dimensional images or patterns, i.e., symbols or marks, represent the virtual marker.). Therefore, the combination of Ebrahimi and MIZUKAMI discloses all elements of the amended claims. See the Claim Rejections - 35 USC § 103 section below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5, 11, 15, 16, and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200346341 MIZUKAMI et al. (MIZUKAMI hereafter) in view of US 20200225673 Ebrahimi et al. (Ebrahimi hereafter). 
Regarding Claim 1, MIZUKAMI discloses An autonomous mobile body that autonomously operates, the autonomous mobile body comprising: a driving unit to control at least one actuator to control movement of the autonomous mobile body (see at least MIZUKAMI [¶0007, 0139], “The operation control unit generates [] control sequence data for causing a driving unit of an autonomous mobile body to execute an autonomous movement”, “The driving unit 160 has a function of bending and stretching a plurality of a joint part included in the autonomous mobile body 10 on the basis of control by the operation control unit 150. More specifically, the driving unit 160 drives the actuator 570 included in each joint part on the basis of control by the operation control unit 150.”); and a pattern of the virtual marker is represented by a symbol or a mark (see at least MIZUKAMI [¶0129], “the recognition unit 120 may perform human identification, recognition of facial expression and line-of-sight, object recognition, color recognition, shape recognition, marker recognition, obstacle recognition, step recognition, brightness recognition, and the like.”, that means two-dimensional and three-dimensional images or patterns, i.e., symbols or marks, represent the virtual marker.); and control operation of the driving unit to control motion of the autonomous mobile body to perform the planned action (see at least MIZUKAMI [¶0007, 0135, 0137], “The operation control unit generates [] control sequence data for causing a driving unit of an autonomous mobile body to execute an autonomous movement”, “The action planning unit 140 has a function of planning an action to be performed by the autonomous mobile body 10”, “The operation control unit 150 has a function of controlling operations of the driving unit 160 and the output unit 170 on the basis of an action plan by the action planning unit 140.”). 
MIZUKAMI does not explicitly disclose circuitry configured to: recognize a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map; plan an action of the autonomous mobile body not to enter a predetermined area based on the virtual marker. However, Ebrahimi is directed towards an obstacle recognition method for autonomous robots and discloses circuitry configured to: recognize a virtual marker at a position on a map (see at least Ebrahimi [¶0282] “The robot may, for example, use the map to autonomously navigate the environment during operation, e.g., accessing the map to determine that a candidate route is blocked by an obstacle denoted in the map, [] the present techniques may also be used for plane finding in augmented reality, barrier detection in virtual reality applications”), wherein an information processing terminal was used to install the virtual marker on the map (see at least Ebrahimi [¶0567], “For example, via such an interface, the user may extend the boundaries of the map [] in areas where the actual boundaries are further than those identified by sensors of the robot, trim boundaries where sensors identified boundaries further than the actual boundaries, or adjusts the location of doorways. Or the user may create virtual boundaries”); plan an action of the autonomous mobile body not to enter a predetermined area based on the virtual marker (see at least Ebrahimi [¶0567], “the user may create virtual boundaries that segment a room for different treatment or across which the robot will not traverse. In some cases where the processor creates an accurate map of the environment, the user may adjust the map boundaries to keep the robot from entering some areas.”). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of Ebrahimi to modify MIZUKAMI, with a reasonable expectation of success, to implement the technique of recognizing a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map, and planning an action of an autonomous mobile body not to enter a predetermined area based on the virtual marker, for the purpose of providing a wider and more realistic range of actions to the robot, thus improving user experience.

Regarding Claim 2, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 1, MIZUKAMI further discloses wherein the circuitry is configured to plan the action of the autonomous mobile body with respect to the virtual marker on a basis of at least one of a use situation of the autonomous mobile body (see at least MIZUKAMI [¶0135], “The action planning unit 140 has a function of planning an action to be performed by the autonomous mobile body 10, on the basis of a situation”), a situation when the virtual marker is recognized (see at least MIZUKAMI [¶0135], “a situation estimated by the recognition unit”), or a use situation of another autonomous mobile body (see at least MIZUKAMI [¶0193], “the operation control unit 150 according to the present embodiment can generate control sequence data in which a relative position with respect to another autonomous mobile body 10 is recorded”). 
Regarding Claim 5, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 2, MIZUKAMI further discloses wherein the circuitry is configured to set a desire of the autonomous mobile body on a basis of the situation when the virtual marker is recognized (see at least MIZUKAMI [¶0135], “The action planning unit 140 has a function of planning an action to be performed by the autonomous mobile body 10, on the basis of a situation estimated by the recognition unit”), and plan the action of the autonomous mobile body with respect to the virtual marker on a basis of the desire (see at least MIZUKAMI [¶0077, 0078], “determines and executes an autonomous movement by comprehensively determining desires, emotions, surrounding environments, and the like”).

Regarding Claim 11, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 1, MIZUKAMI further discloses wherein the circuitry is configured to learn an application of the virtual marker (see at least MIZUKAMI [¶0133, 0147, Fig. 8], “[0133] The learning unit 130 has a function of learning an environment (situation) and an action, and an effect of the action on the environment.”), and plan the action of the autonomous mobile body on a basis of the application learned of the virtual marker (see at least MIZUKAMI [¶0135, 0291], “[0135] planning an action to be performed by the autonomous mobile body 10, on the basis of a situation estimated by the recognition unit 120 and knowledge learned by the learning unit 130.”, “[0291] the action recommendation unit 220 according to the present embodiment uses the situation summary received from the recognition unit 120 and knowledge as collective intelligence that a learning unit 210 has regarding the plurality of autonomous mobile bodies 10, to determine a recommended action and present information regarding the recommended action to the action planning unit 140.”). 
Regarding Claim 15, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 1, MIZUKAMI further discloses wherein the circuitry is configured to identify another autonomous mobile body on a basis of whether or not the virtual marker is attached or a type of the virtual marker (see at least MIZUKAMI [¶0129], “The recognition unit 120 has a function of performing various kinds of recognition related to the user, a surrounding environment, and a state of the autonomous mobile body 10, on the basis of various kinds of information collected by the input unit 110. As an example, the recognition unit 120 may perform human identification, recognition of facial expression and line-of-sight, object recognition, color recognition, shape recognition, marker recognition, obstacle recognition, step recognition, brightness recognition, and the like.”, that means two-dimensional and three-dimensional images or patterns, i.e., symbols or marks, represent the virtual marker.), and plan the action of the autonomous mobile body on a basis of an identification result of the another autonomous mobile body (see at least MIZUKAMI [¶0193], “the operation control unit 150 according to the present embodiment can generate control sequence data in which a relative position with respect to another autonomous mobile body 10 is recorded”). 
Regarding Claim 16, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 1, MIZUKAMI further discloses wherein the virtual marker represents a predetermined two-dimensional or three-dimensional pattern (see at least MIZUKAMI [¶0129], “the recognition unit 120 may perform human identification, recognition of facial expression and line-of-sight, object recognition, color recognition, shape recognition, marker recognition, obstacle recognition, step recognition, brightness recognition, and the like.”, that means two-dimensional and three-dimensional images or patterns, i.e., symbols or marks, represent the virtual marker.).

Regarding Claim 18, MIZUKAMI discloses An information processing apparatus (see at least MIZUKAMI [¶0004, 0073, 0118, Claim 1], “[0004] an information processing apparatus”) comprising: a pattern of the virtual marker is represented by a symbol or a mark (see at least MIZUKAMI [¶0129], “the recognition unit 120 may perform human identification, recognition of facial expression and line-of-sight, object recognition, color recognition, shape recognition, marker recognition, obstacle recognition, step recognition, brightness recognition, and the like.”, that means two-dimensional and three-dimensional images or patterns, i.e., symbols or marks, represent the virtual marker.); and control operation of a driving unit to control motion of the autonomous mobile body to perform the planned action (see at least MIZUKAMI [¶0007, 0135, 0137], “The operation control unit generates [] control sequence data for causing a driving unit of an autonomous mobile body to execute an autonomous movement”, “The action planning unit 140 has a function of planning an action to be performed by the autonomous mobile body 10”, “The operation control unit 150 has a function of controlling operations of the driving unit 160 and the output unit 170 on the basis of an action plan by the action planning unit 140.”). 
MIZUKAMI does not explicitly disclose circuitry configured to: recognize a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map; plan an action of the autonomous mobile body, that includes circuitry, not to enter a predetermined area based on the virtual marker. However, Ebrahimi is directed towards an obstacle recognition method for autonomous robots and discloses circuitry configured to: recognize a virtual marker at a position on a map (see at least Ebrahimi [¶0282] “The robot may, for example, use the map to autonomously navigate the environment during operation, e.g., accessing the map to determine that a candidate route is blocked by an obstacle denoted in the map, [] the present techniques may also be used for plane finding in augmented reality, barrier detection in virtual reality applications”), wherein an information processing terminal was used to install the virtual marker on the map (see at least Ebrahimi [¶0567], “For example, via such an interface, the user may extend the boundaries of the map [] in areas where the actual boundaries are further than those identified by sensors of the robot, trim boundaries where sensors identified boundaries further than the actual boundaries, or adjusts the location of doorways. Or the user may create virtual boundaries”); plan an action of the autonomous mobile body, that includes circuitry, not to enter a predetermined area based on the virtual marker (see at least Ebrahimi [¶0567], “the user may create virtual boundaries that segment a room for different treatment or across which the robot will not traverse. In some cases where the processor creates an accurate map of the environment, the user may adjust the map boundaries to keep the robot from entering some areas.”). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of Ebrahimi to modify MIZUKAMI, with a reasonable expectation of success, to implement the technique of recognizing a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map, and planning an action of an autonomous mobile body not to enter a predetermined area based on the virtual marker, for the purpose of providing a wider and more realistic range of actions to the robot, thus improving user experience.

Regarding Claim 19, MIZUKAMI discloses An information processing method (see at least MIZUKAMI [¶0004, 0006, Claim 19], “[0004] an information processing method”) comprising: a pattern of the virtual marker is represented by a symbol or a mark (see at least MIZUKAMI [¶0129], “the recognition unit 120 may perform human identification, recognition of facial expression and line-of-sight, object recognition, color recognition, shape recognition, marker recognition, obstacle recognition, step recognition, brightness recognition, and the like.”, that means two-dimensional and three-dimensional images or patterns, i.e., symbols or marks, represent the virtual marker.). 
controlling operation of a driving unit of the autonomous mobile body to control motion of the autonomous mobile body to perform the planned action (see at least MIZUKAMI [¶0007, 0135, 0137], “The operation control unit generates [] control sequence data for causing a driving unit of an autonomous mobile body to execute an autonomous movement”, “The action planning unit 140 has a function of planning an action to be performed by the autonomous mobile body 10”, “The operation control unit 150 has a function of controlling operations of the driving unit 160 and the output unit 170 on the basis of an action plan by the action planning unit 140.”).

MIZUKAMI does not explicitly disclose recognizing a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map; planning an action of the autonomous mobile body, that includes circuitry, not to enter a predetermined area based on the virtual marker. However, Ebrahimi is directed towards an obstacle recognition method for autonomous robots and discloses recognizing a virtual marker at a position on a map (see at least Ebrahimi [¶0282] “The robot may, for example, use the map to autonomously navigate the environment during operation, e.g., accessing the map to determine that a candidate route is blocked by an obstacle denoted in the map, [] the present techniques may also be used for plane finding in augmented reality, barrier detection in virtual reality applications”), wherein an information processing terminal was used to install the virtual marker on the map (see at least Ebrahimi [¶0567], “For example, via such an interface, the user may extend the boundaries of the map [] in areas where the actual boundaries are further than those identified by sensors of the robot, trim boundaries where sensors identified boundaries further than the actual boundaries, or adjusts the location of doorways. 
Or the user may create virtual boundaries”); planning an action of the autonomous mobile body, that includes circuitry, not to enter a predetermined area based on the virtual marker (see at least Ebrahimi [¶0567], “the user may create virtual boundaries that segment a room for different treatment or across which the robot will not traverse. In some cases where the processor creates an accurate map of the environment, the user may adjust the map boundaries to keep the robot from entering some areas.”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of Ebrahimi to modify MIZUKAMI, with a reasonable expectation of success, to implement the technique of recognizing a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map, and planning an action of an autonomous mobile body not to enter a predetermined area based on the virtual marker, for the purpose of providing a wider and more realistic range of actions to the robot, thus improving user experience. 
Regarding Claim 20 (Currently Amended), MIZUKAMI discloses A non-transitory computer-readable medium storing executable instructions, which when executed by circuitry, cause the circuitry to perform a method (see at least MIZUKAMI [¶0330, 0332], “The CPU 871 functions as, for example, an arithmetic processing device or a control device, and controls the all of or a part of an operation of each component on the basis of various programs recorded in the ROM 872, RAM 873, the storage 880, or a removable recording medium 901.”, “The ROM 872 is means that stores a program to be read by the CPU 871, data to be used for calculation, and the like.”), the method comprising: controlling operation of a driving unit of the autonomous mobile body to control motion of the autonomous mobile body to perform the planned action (see at least MIZUKAMI [¶0007, 0135, 0137], “The operation control unit generates [] control sequence data for causing a driving unit of an autonomous mobile body to execute an autonomous movement”, “The action planning unit 140 has a function of planning an action to be performed by the autonomous mobile body 10”, “The operation control unit 150 has a function of controlling operations of the driving unit 160 and the output unit 170 on the basis of an action plan by the action planning unit 140.”). 
MIZUKAMI does not explicitly disclose recognizing a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map; planning an action of an autonomous mobile body, that includes circuitry, not to enter a predetermined area based on the virtual marker. However, Ebrahimi is directed towards an obstacle recognition method for autonomous robots and discloses recognizing a virtual marker at a position on a map (see at least Ebrahimi [¶0282] “The robot may, for example, use the map to autonomously navigate the environment during operation, e.g., accessing the map to determine that a candidate route is blocked by an obstacle denoted in the map, [] the present techniques may also be used for plane finding in augmented reality, barrier detection in virtual reality applications”), wherein an information processing terminal was used to install the virtual marker on the map (see at least Ebrahimi [¶0567], “For example, via such an interface, the user may extend the boundaries of the map [] in areas where the actual boundaries are further than those identified by sensors of the robot, trim boundaries where sensors identified boundaries further than the actual boundaries, or adjusts the location of doorways. Or the user may create virtual boundaries”); planning an action of an autonomous mobile body, that includes circuitry, not to enter a predetermined area based on the virtual marker (see at least Ebrahimi [¶0567], “the user may create virtual boundaries that segment a room for different treatment or across which the robot will not traverse. In some cases where the processor creates an accurate map of the environment, the user may adjust the map boundaries to keep the robot from entering some areas.”). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of Ebrahimi to modify MIZUKAMI, with a reasonable expectation of success, to implement the technique of recognizing a virtual marker at a position on a map, wherein an information processing terminal was used to install the virtual marker on the map, and planning an action of an autonomous mobile body not to enter a predetermined area based on the virtual marker, for the purpose of providing a wider and more realistic range of actions to the robot, thus improving user experience.

Regarding Claim 21 (New), MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 1 but do not explicitly disclose wherein the map information is map information indicating a layout of a house. However, Ebrahimi discloses wherein the map information is map information indicating a layout of a house (see at least Ebrahimi [¶0351], “In some embodiments, the processor of the robot may use the map (e.g., locations of rooms, layout of areas, etc.) to determine efficient coverage of the environment”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of Ebrahimi to modify MIZUKAMI, with a reasonable expectation of success, to implement the technique of map information indicating a layout of a house, for the purpose of providing environment information to the robot to allow for a wider and more realistic range of actions to the robot, thus improving user experience.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over US 20200346341 MIZUKAMI et al. (MIZUKAMI hereafter) in view of US 20200225673 Ebrahimi et al. (Ebrahimi hereafter), and further in view of US 20180085928 Yamato (Yamato hereafter). 
Regarding Claim 13, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 1 but do not explicitly disclose wherein the circuitry is configured to plan the action of the autonomous mobile body on a basis of an application of the virtual marker that changes depending on a version of software installed in the autonomous mobile body. However, Yamato is directed towards a Robot, Robot Control Method, and Robot System, and discloses wherein the circuitry is configured to plan the action of the autonomous mobile body on a basis of an application of the virtual marker that changes depending on a version of software installed in the autonomous mobile body (see at least Yamato [¶0050], “The robot 100 receives a message transmitted from the mobile terminal 400 of the user via the network 500. The robot selects an application based on instruction information included in the received message and operates based on the selected application. Since the application can be downloaded and installed in the mobile terminal 400 of the user from the application market server 200 in response to a request of the user, it is possible to realize a versatile operation of the robot.”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of Yamato to modify MIZUKAMI and Ebrahimi in combination, with a reasonable expectation of success, to use the technique of planning the action of the autonomous mobile body on a basis of an application of the virtual marker that changes depending on a version of software installed in the autonomous mobile body, as taught by Yamato, for the purpose of providing a selection of applications or software versions that add expanded functionality to the autonomous mobile body.

Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200346341 MIZUKAMI et al. (MIZUKAMI hereafter) in view of US 20200225673 Ebrahimi et al. 
(Ebrahimi hereafter), and further in view of US 20030078696 Sakamoto et al. (Sakamoto hereafter).

Regarding Claim 3, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 2, but do not explicitly disclose wherein the circuitry is configured to set a growth degree of the autonomous mobile body on a basis of the use situation of the autonomous mobile body, and plan the action of the autonomous mobile body with respect to the virtual marker on a basis of the growth degree. However, Sakamoto is directed towards a Robot System and a Robot Device and discloses wherein the circuitry is configured to set a growth degree of the autonomous mobile body on a basis of the use situation of the autonomous mobile body (see at least Sakamoto [¶0223, 0225], “the controller 122 always monitors and counts generation of a plurality of predetermined factors related to "growth" (hereinafter referred to as growth factors) such as strengthening learning consisting of order inputs”, “this pet robot system 120 are four "growth steps" of "baby period," "child period," "young period" and "adult period."), and plan the action of the autonomous mobile body with respect to the virtual marker on a basis of the growth degree (see at least Sakamoto [¶0223, 0228], “a pet robot 121 has a function of changing motions and actions as if the real animal "grew"”, “Each time the total experience value of the growth factors exceeds each of a threshold value predetermined for each "young period" and "adult period," the controller 122 similarly modifies the action and motion models”). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of Sakamoto to modify MIZUKAMI and Ebrahimi in combination, with a reasonable expectation of success, to implement the technique of circuitry configured to set a growth degree of the autonomous mobile body on a basis of the use situation, and plan the action of the autonomous mobile body with respect to the virtual marker on a basis of the growth degree, for the purpose of simulating the aging and growing process of a pet robot by planning actions on the basis of the growth degree. Doing so would provide a more realistic and fun experience to the users.

Regarding Claim 4, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 3, but do not explicitly disclose wherein the circuitry is configured to control a success rate of the action with respect to the virtual marker on the basis of the growth degree. However, Sakamoto discloses wherein the circuitry is configured to control a success rate of the action with respect to the virtual marker on the basis of the growth degree (see at least Sakamoto [¶0223, 0228, 0240], “a pet robot 121 has a function of changing motions and actions as if the real animal "grew"”, “Each time the total experience value of the growth factors exceeds each of a threshold value predetermined for each "young period" and "adult period," the controller 122 similarly modifies the action and motion models”, “an action generating mechanism section 133 which allows the pet robot 121 to actually manifest actions on the basis of a result determined by the action determining mechanism section 132 and a growth step control mechanism section 133 which controls the "growth steps" of the pet robot 121.”).
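The growth-step mechanism quoted from Sakamoto above (experience values crossing predetermined thresholds, with behavior varying by growth step) can be illustrated with a minimal sketch. All names, thresholds, and rates below are hypothetical and are not taken from the reference:

```python
# Hypothetical sketch of a Sakamoto-style growth mechanism: accumulated
# "growth factor" experience crosses fixed thresholds to advance the growth
# step, and the action success rate scales with the current step.

GROWTH_STEPS = ["baby", "child", "young", "adult"]
THRESHOLDS = [0, 100, 300, 600]           # experience required per step
SUCCESS_RATES = [0.25, 0.50, 0.75, 0.95]  # action success rate per step

class GrowthModel:
    def __init__(self):
        self.experience = 0

    def record_growth_factor(self, value=1):
        """Count one growth factor (e.g. a recognized command input)."""
        self.experience += value

    def growth_step(self):
        """Return the highest step whose threshold has been reached."""
        step = 0
        for i, threshold in enumerate(THRESHOLDS):
            if self.experience >= threshold:
                step = i
        return GROWTH_STEPS[step]

    def success_rate(self):
        """Map the current growth step to an action success rate."""
        return SUCCESS_RATES[GROWTH_STEPS.index(self.growth_step())]

model = GrowthModel()
for _ in range(150):
    model.record_growth_factor()
print(model.growth_step(), model.success_rate())  # child 0.5
```

This mirrors, at the level of the claim language, how a use situation could set a growth degree and how a success rate could then be controlled on the basis of that degree.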
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of Sakamoto to modify MIZUKAMI and Ebrahimi in combination, with a reasonable expectation of success, to use the technique of circuitry configured to control a success rate of the action with respect to the virtual marker on the basis of the growth degree, as taught by Sakamoto, for the purpose of providing a wider and more realistic range of actions to the robot, thus improving customer experience.

Claims 6, 7, 9, 10, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200346341 MIZUKAMI et al., in view of US 20200225673 Ebrahimi et al. (Ebrahimi hereafter), and further in view of US 20190126157 HAYASHI (HAYASHI hereafter).

Regarding Claim 6, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 5, but do not explicitly disclose wherein the circuitry is configured to plan the action of the autonomous mobile body so as to perform a motion based on the desire within a predetermined region based on the marker. However, HAYASHI is directed towards an Autonomously Acting Robot and discloses wherein the circuitry is configured to plan the action of the autonomous mobile body so as to perform a motion based on the desire within a predetermined region based on the marker (see at least HAYASHI [¶0067, 0068, Fig. 7], “The emotion map 116 expresses emotional swings as an internal state of the robot 100. The robot 100 heads for a favored point, avoids a disliked point, stays for a while at the favored point, and in time performs the next action.”).
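The emotion-map behavior quoted from HAYASHI (heading for a favored point and avoiding a disliked one) can be sketched as a simple scoring rule over points near a marker. The names and numbers here are illustrative assumptions, not taken from HAYASHI:

```python
# Hypothetical sketch of desire-driven motion within a region around a marker:
# the body heads for the most-favored point inside the region and ignores
# disliked points (negative desire scores).

MARKER = (0, 0)
REGION_RADIUS = 5

def in_region(point, marker=MARKER, radius=REGION_RADIUS):
    """True if the point lies within the predetermined region."""
    dx, dy = point[0] - marker[0], point[1] - marker[1]
    return dx * dx + dy * dy <= radius * radius

def next_target(points, desire):
    """Head for the favored point with the highest desire score inside the
    region; fall back to the marker if no favored point is in range."""
    candidates = [p for p in points if in_region(p) and desire[p] > 0]
    return max(candidates, key=lambda p: desire[p]) if candidates else MARKER

points = [(1, 1), (2, 0), (9, 9)]
desire = {(1, 1): 0.8, (2, 0): -0.3, (9, 9): 0.9}
print(next_target(points, desire))  # (1, 1); (9, 9) scores higher but is out of range
```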
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of HAYASHI to modify MIZUKAMI and Ebrahimi in combination, with a reasonable expectation of success, to use the technique of planning the action of the autonomous mobile body so as to perform a motion based on the desire within a predetermined region based on the marker, as taught by HAYASHI, for the purpose of more closely simulating the behavior of a living creature like a pet for the enjoyment of the user.

Regarding Claim 7, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 5, but do not explicitly disclose wherein the desire includes at least one of a desire to be close to a person, a desire to play with an object, a desire to move a body, a desire to express an emotion, an excretion desire, or a desire to sleep. However, HAYASHI is directed towards an Autonomously Acting Robot and discloses wherein the desire includes at least one of a desire to be close to a person, a desire to play with an object, a desire to move a body, a desire to express an emotion, an excretion desire, or a desire to sleep (see at least HAYASHI [¶0068, 0102], “[0068] various action maps such as curiosity, a desire to avoid fear, a desire to seek security, and a desire to seek physical ease such as quietude, low light, coolness, or warmth, can be defined.”, “[0102] The action determining unit 140 can also execute a gesture of holding up both arms 106 as a gesture asking for “a hug””).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of HAYASHI to modify MIZUKAMI and Ebrahimi in combination, with a reasonable expectation of success, to define a plurality of desire parameters to be used by the action planning unit, as taught by HAYASHI, for the purpose of more closely simulating the behavior of a living creature like a pet for the enjoyment of the user.

Regarding Claim 9, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 2. MIZUKAMI further discloses wherein the circuitry is configured to set a preference for the virtual marker on a basis of at least one of the use situation of the autonomous mobile body (see at least MIZUKAMI [¶0135], “planning an action to be performed by the autonomous mobile body 10, on the basis of a situation”) or the use situation of the another autonomous mobile body (see at least MIZUKAMI [¶0193], “the operation control unit 150 according to the present embodiment can generate control sequence data in which a relative position with respect to another autonomous mobile body 10 is recorded”). MIZUKAMI does not explicitly disclose and plan the action of the autonomous mobile body with respect to the virtual marker on a basis of the preference. However, HAYASHI is directed towards an Autonomously Acting Robot and discloses plan the action of the autonomous mobile body with respect to the virtual marker on a basis of the preference (see at least HAYASHI [¶0099, 0133], “[0099] the robot 100 can also carry out an action in accordance with familiarity.”, “[0133] the robot 100 approaches the user when finding a user with high familiarity, and conversely, moves away from the user when finding a user with low familiarity.”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of HAYASHI to modify MIZUKAMI and Ebrahimi in combination, with a reasonable expectation of success, to use the technique of planning the action of the autonomous mobile body with respect to the virtual marker on a basis of the preference, as taught by HAYASHI, for the purpose of providing a wider and more realistic range of actions to the robot, thus improving customer experience.

Regarding Claim 10, MIZUKAMI, Ebrahimi and HAYASHI in combination disclose The autonomous mobile body according to claim 9, but do not explicitly disclose wherein the circuitry is configured to plan the action of the autonomous mobile body so as not to approach the virtual marker in a case where the preference is less than a predetermined threshold value. However, HAYASHI further discloses wherein the circuitry is configured to plan the action of the autonomous mobile body so as not to approach the virtual marker in a case where the preference is less than a predetermined threshold value (see at least HAYASHI [¶0133], “the robot 100 approaches the user when finding a user with high familiarity, and conversely, moves away from the user when finding a user with low familiarity.”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of HAYASHI to modify MIZUKAMI and Ebrahimi in combination, with a reasonable expectation of success, to use the technique of not approaching the virtual marker in a case where the preference is less than a predetermined threshold value, as taught by HAYASHI, for the purpose of providing a wider and more realistic range of actions to the robot, thus improving customer experience.
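The familiarity-based approach/avoid behavior cited from HAYASHI for Claims 9 and 10 reduces to a threshold test on a preference value. A minimal sketch, with the threshold and function names as illustrative assumptions rather than anything drawn from the references:

```python
# Hypothetical sketch of preference-thresholded planning: approach the virtual
# marker when its preference meets the threshold, otherwise plan not to approach.

PREFERENCE_THRESHOLD = 0.5

def plan_action(marker_preference):
    """Return the planned action for a virtual marker given its preference."""
    if marker_preference >= PREFERENCE_THRESHOLD:
        return "approach"
    return "avoid"

print(plan_action(0.8), plan_action(0.2))  # approach avoid
```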
Regarding Claim 14, MIZUKAMI and Ebrahimi in combination disclose The autonomous mobile body according to claim 1, but do not explicitly disclose wherein the circuitry is configured to identify a person on a basis of whether or not the virtual marker is attached or a type of the virtual marker, and plan the action of the autonomous mobile body on a basis of an identification result of the person. However, HAYASHI is directed towards an Autonomously Acting Robot and discloses wherein the circuitry is configured to identify a person on a basis of whether or not the virtual marker is attached or a type of the virtual marker (see at least HAYASHI [¶0082, 0088], “[0082] The robot 100 identifies a user based on the user's physical characteristics or behavioral characteristics. The robot 100 constantly films a periphery using the incorporated camera. Further, the robot 100 extracts the physical characteristics and behavioral characteristics of a person appearing in an image. The physical characteristics may be visual characteristics inherent to a body”, “[0088] The recognizing unit 212 further includes a person recognizing unit 214 and a response recognizing unit 228. The person recognizing unit 214 recognizes a person from an image filmed by the camera incorporated in the robot 100, and extracts the physical characteristics and behavioral characteristics of the person.”), and plan the action of the autonomous mobile body on a basis of an identification result of the person (see at least HAYASHI [¶0084, 0099], “[0084] The robot 100 has a familiarity internal parameter for each user.”, “[0099] the robot 100 can also carry out an action in accordance with familiarity.”). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of HAYASHI to modify MIZUKAMI and Ebrahimi in combination, with a reasonable expectation of success, to use the technique of identifying a person and planning the action of the autonomous mobile body on a basis of an identification result of the person, as taught by HAYASHI, for the purpose of performing different actions based on each user. This simulates the behavior of a living creature like a dog or a cat.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over US 20200346341 MIZUKAMI et al., in view of US 20200225673 Ebrahimi et al. (Ebrahimi hereafter), in view of US 20190126157 HAYASHI (HAYASHI hereafter), and further in view of JP 2003089077 HIGAKI et al. (HIGAKI hereafter).

Regarding Claim 8, MIZUKAMI, Ebrahimi and HAYASHI in combination disclose The autonomous mobile body according to claim 7, but do not explicitly disclose wherein in a case where a degree of the excretion desire is equal to or greater than a predetermined threshold value, the circuitry plans the action of the autonomous mobile body so as to perform a motion simulating an excretion action within a predetermined region based on the virtual marker.
However, HIGAKI is directed towards a Robot and discloses wherein in a case where a degree of the excretion desire is equal to or greater than a predetermined threshold value, the circuitry plans the action of the autonomous mobile body so as to perform a motion simulating an excretion action within a predetermined region based on the virtual marker (see at least HIGAKI English translation [¶0022, 0091], “by making the robot perform a motion requesting excretion when the robot transitions to a reduced power generation state, the robot can be made to express the emotion of "suffering."”, “the robot 100 can express the emotion of "excretion" by assuming a distressed crouching posture.”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have considered the teachings of HIGAKI to modify MIZUKAMI, Ebrahimi and HAYASHI in combination, with a reasonable expectation of success, to use the technique of planning the action of the autonomous mobile body so as to perform a motion simulating an excretion action when a degree of the excretion desire is equal to or greater than a predetermined threshold value, as taught by HIGAKI. This allows owners to have a simulated experience equivalent to when their real pet, such as a dog or cat, needs to defecate and then regains its energy by taking care of it. Through this simulated experience, children can develop a greater sense of love and compassion for robots and learn the value of life.

Conclusion

Examiner notes that the fundamentals of the rejection are based on the broadest reasonable interpretation of the claim language. Any reference to specific figures, columns, lines, and paragraphs should not be considered limiting in any way; the entire cited reference, as well as any secondary teaching reference(s), is considered to provide relevant disclosure relating to the claimed invention.
Applicant is kindly invited to consider the reference as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.

Examiner encourages Applicant to fill out and submit form PTO-SB-439 to allow internet communications in accordance with 37 CFR 1.33 (MPEP 502.03). Should the need arise to perfect applicant-proposed or examiner’s amendments, authorization for e-mail correspondence would already be in place and would save time.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Neit J. Nieves Flores, whose telephone number is (703) 756-5864. The examiner can normally be reached M-F 0930-1800 AST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi, can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Neit J. Nieves Flores/
Patent Examiner, Art Unit 3664

/RACHID BENDIDI/
Supervisory Patent Examiner, Art Unit 3664

Prosecution Timeline

May 17, 2023
Application Filed
Dec 20, 2024
Non-Final Rejection — §103
Mar 18, 2025
Response Filed
Jun 27, 2025
Final Rejection — §103
Aug 18, 2025
Response after Non-Final Action
Sep 30, 2025
Request for Continued Examination
Oct 13, 2025
Response after Non-Final Action
Feb 28, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12517523
System and Method for Controlling Motion of a Vehicle in a Stochastic Disturbance Field
2y 5m to grant Granted Jan 06, 2026
Patent 12479292
TEMPORARY TORQUE CONTROL SYSTEM
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
43%
Grant Probability
99%
With Interview (+80.0%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
