Prosecution Insights
Last updated: April 19, 2026
Application No. 19/322,341

MOBILE BODY, METHOD OF CONTROLLING MOBILE BODY, AND PROGRAM

Status: Non-Final OA (§103)
Filed: Sep 08, 2025
Examiner: ALHARBI, ADAM MOHAMED
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Sony Group Corporation
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 88% (554 granted / 630 resolved; +35.9% vs TC avg, above average)
Interview Lift: +2.8% across resolved cases with interview (minimal lift)
Avg Prosecution: 2y 8m (33 currently pending)
Total Applications: 663 across all art units
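The headline figures above follow from the stated career data by simple arithmetic. A quick sketch, taking the 554/630 career record and the +2.8 percentage-point interview lift from the panel (whole-percent rounding matches the display):

```python
# Reproduce the dashboard's headline figures from the stated career data.
granted = 554
resolved = 630

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 87.9%, shown as 88%

# The panel reports a +2.8 percentage-point lift for cases with an interview.
with_interview = career_allow_rate + 0.028
print(f"With interview: {with_interview:.0%}")  # rounds to 91%
```

This confirms the 88% and 91% figures are the same underlying rate with and without the stated interview adjustment.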

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 58.6% (+18.6% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Deltas are relative to a Tech Center average estimate; based on career data from 630 resolved cases.
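The displayed deltas and the examiner's per-statute rates together pin down the implied Tech Center baseline: subtracting each "vs TC avg" delta from the examiner's rate recovers it. A quick sketch using the figures shown above:

```python
# Per-statute rates and "vs TC avg" deltas, as displayed in the panel.
examiner_rate = {"101": 5.3, "103": 58.6, "102": 22.0, "112": 5.5}
delta_vs_tc = {"101": -34.7, "103": 18.6, "102": -18.0, "112": -34.5}

# Subtracting each delta from the examiner's rate recovers the implied
# Tech Center baseline for that statute.
for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate}%, implied TC avg {tc_avg:.1f}%")
```

Notably, all four deltas resolve to the same 40.0% baseline, which suggests the comparison is against a single overall TC figure rather than separate per-statute averages.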

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the application filed on 09/08/2025. Claims 1-6 are presently pending and are presented for examination.

Title

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 20060129276 (hereinafter, "Watabe") in view of U.S. Pub. No. 20200097006 (hereinafter, "Liu").

Regarding claim 1, Watabe discloses a mobile apparatus comprising: circuitry configured to estimate a self-position of the mobile apparatus based on a parameter by using an environmental map embedded with waypoint information and a parameter for self-position estimation ("The map data storage unit 81 stores map data on the active area where the robot R moves around, and it may be a random access memory (RAM), read only memory (ROM) or hard disk. The map data contains position data and mark-formed region data. The position data indicates where the individual marks M are placed on the active area, while the mark-formed region data indicates data generated by adding a predetermined width to the position data. The map data storage unit 81 outputs the stored map data to the switch determination unit 82 and self-location calculation unit 85" (para 0062)), and acquire the parameter for self-position estimation from the environmental map at a location corresponding to the waypoint information (para 0062).

However, Watabe does not explicitly teach dynamically switch the parameter for self-position estimation in accordance with a travelling environment during operation.

Liu, in the same field of endeavor, teaches dynamically switch the parameter for self-position estimation in accordance with a travelling environment during operation ("the processor of the robotic device may employ localization and mapping techniques, such as simultaneous localization and mapping (SLAM), using sensor data that is weighted based on its reliability to plan a navigation path within the environment. In various embodiments, the processor of the robotic device may extract semantic information about situations that negatively affect performance and/or accuracy of one or more sensor used for localization. Examples of such situations maybe include poor lighting conditions or lack of objects in the environment, which may impact computer vision-based sensors. Further examples may include a particular flooring material or a degree of incline" (para 0019)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Watabe with the teachings of Liu in order to improve navigation by identifying and adapting to temporal and spatial patterns in its surroundings; see Liu at least at [0019].

Regarding claim 5, Watabe discloses a control method for a mobile apparatus, comprising: estimating a self-position of the mobile apparatus based on a parameter by utilizing an environmental map embedded with waypoint information and a parameter for self-position estimation ("The map data storage unit 81 stores map data on the active area where the robot R moves around, and it may be a random access memory (RAM), read only memory (ROM) or hard disk. The map data contains position data and mark-formed region data. The position data indicates where the individual marks M are placed on the active area, while the mark-formed region data indicates data generated by adding a predetermined width to the position data. The map data storage unit 81 outputs the stored map data to the switch determination unit 82 and self-location calculation unit 85" (para 0062)); and obtaining the parameter from the environmental map at locations corresponding to the waypoint information (para 0062).

However, Watabe does not explicitly teach dynamically switching the parameter for self-position estimation in accordance with a travelling environment during operation.

Liu, in the same field of endeavor, teaches dynamically switching the parameter for self-position estimation in accordance with a travelling environment during operation ("the processor of the robotic device may employ localization and mapping techniques, such as simultaneous localization and mapping (SLAM), using sensor data that is weighted based on its reliability to plan a navigation path within the environment. In various embodiments, the processor of the robotic device may extract semantic information about situations that negatively affect performance and/or accuracy of one or more sensor used for localization. Examples of such situations maybe include poor lighting conditions or lack of objects in the environment, which may impact computer vision-based sensors. Further examples may include a particular flooring material or a degree of incline" (para 0019)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Watabe with the teachings of Liu in order to improve navigation by identifying and adapting to temporal and spatial patterns in its surroundings; see Liu at least at [0019].

Regarding claim 6, Watabe discloses a non-transitory computer-readable storage medium having embodied thereon a program, which when executed by a computer causes the computer to execute a control method for a mobile apparatus, the control method comprising: estimating a self-position of the mobile apparatus based on a parameter by utilizing an environmental map embedded with waypoint information and a parameter for self-position estimation ("The map data storage unit 81 stores map data on the active area where the robot R moves around, and it may be a random access memory (RAM), read only memory (ROM) or hard disk. The map data contains position data and mark-formed region data. The position data indicates where the individual marks M are placed on the active area, while the mark-formed region data indicates data generated by adding a predetermined width to the position data. The map data storage unit 81 outputs the stored map data to the switch determination unit 82 and self-location calculation unit 85" (para 0062)); and obtaining the parameter from the environmental map at a location corresponding to the waypoint information (para 0062).

However, Watabe does not explicitly teach dynamically switching the parameter for self-position estimation in accordance with a travelling environment during operation.

Liu, in the same field of endeavor, teaches dynamically switching the parameter for self-position estimation in accordance with a travelling environment during operation ("the processor of the robotic device may employ localization and mapping techniques, such as simultaneous localization and mapping (SLAM), using sensor data that is weighted based on its reliability to plan a navigation path within the environment. In various embodiments, the processor of the robotic device may extract semantic information about situations that negatively affect performance and/or accuracy of one or more sensor used for localization. Examples of such situations maybe include poor lighting conditions or lack of objects in the environment, which may impact computer vision-based sensors. Further examples may include a particular flooring material or a degree of incline" (para 0019)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Watabe with the teachings of Liu in order to improve navigation by identifying and adapting to temporal and spatial patterns in its surroundings; see Liu at least at [0019].

Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 20060129276 (hereinafter, "Watabe"), in view of U.S. Pub. No. 20200097006 (hereinafter, "Liu") as applied to claim 1 above, and further in view of U.S. Pat. No. 11199853 (hereinafter, "Afrouzi").

Regarding claim 2, Watabe discloses the mobile apparatus according to claim 1. However, Watabe does not explicitly teach wherein the parameter for self-position estimation corresponds to at least one of a sensor used for self-position estimation, a covariance value in an extended Kalman filter for fusing a plurality of methods for self-position estimation, or a parameter corresponding to a setting value in each respective method.

Afrouzi, in the same field of endeavor, teaches wherein the parameter for self-position estimation corresponds to at least one of a sensor used for self-position estimation, a covariance value in an extended Kalman filter for fusing a plurality of methods for self-position estimation, or a parameter corresponding to a setting value in each respective method ("IMU measurements in a multi-channel stream indicative of acceleration along three or six axes may be integrated over time to infer a change in pose of the VMP robot, e.g., with a Kalman filter" (Col. 111, lines 35-38)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Watabe with the teachings of Afrouzi in order to integrate measurements over time; see Afrouzi at least at Col. 111, lines 35-38.

Regarding claim 3, Watabe discloses the mobile apparatus according to claim 2. However, Watabe does not explicitly teach wherein the parameter for self-position estimation corresponds to self-position estimation using at least one of an IMU, wheel odometry, visual odometry, SLAM, or GPS.

Liu, in the same field of endeavor, teaches wherein the parameter for self-position estimation corresponds to self-position estimation using at least one of an IMU, wheel odometry, visual odometry, SLAM, or GPS ((Fig. 4, #402) and "As described, the sensor(s) 402 may also include at least one motion feedback sensor, such as a wheel encoder, pressure sensor, or other collision or contact-based sensor. Further, the sensor(s) 402 may include at least one image sensor, such as a visual camera, an infrared sensor, a sonar detector, etc" (para 0070)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Watabe with the teachings of Liu in order to enable the system to perform localization, map generation, and path planning for SLAM processes on the robotic device; see Liu at least at [0070].

Regarding claim 4, Watabe discloses the mobile apparatus according to claim 1. However, Watabe does not explicitly teach wherein the circuitry is further configured to monitor a state of a sensor used for self-position estimation, and select a route based on parameters for self-position estimation in different routes and sensor states corresponding to each parameter for self-position estimation.

Afrouzi, in the same field of endeavor, teaches monitoring a state of a sensor used for self-position estimation ("For example, actuators may be encouraged to find better sources of information, such as robots with better sensors or ideally positioned sensors, and observers may be encouraged to find actuators that have better use of their information. In some embodiments, the processor uses a regret analysis when determining exploration or exploitation. For example, the processor may determine a regret function" (Col. 179, lines 42-49)), and selecting a route based on parameters for self-position estimation in different routes and sensor states corresponding to each parameter for self-position estimation ("In some embodiments, the control system iterates through different evolved routes until a route with a cost below a predetermined threshold is found or for a predetermined amount of time. In some embodiments, the control system randomly chooses a route with higher cost to avoid getting stuck in a local minimum" (Col. 205, lines 45-50)).

One of ordinary skill in the art, before the time of filing, would have been motivated to modify the disclosure of Watabe with the teachings of Afrouzi in order to find better sources of information and to find a route with a cost below a predetermined threshold; see Afrouzi at least at Col. 179, lines 42-49 and Col. 205, lines 45-50.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM ALHARBI, whose telephone number is (313) 446-6621. The examiner can normally be reached M-F, 11:00 AM – 7:30 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Flynn, can be reached at (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM M ALHARBI/
Primary Examiner, Art Unit 3663
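For readers mapping the rejection to the claims: claim 1's "environmental map embedded with waypoint information and a parameter for self-position estimation," read with claim 2's extended-Kalman-filter covariance example, can be pictured with a minimal sketch. This is an illustrative reading of the claim language only, not code from the application or the cited references; every name, waypoint, and covariance value below is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class LocalizationParams:
    sensors: tuple        # sensors to fuse at this waypoint
    meas_covariance: dict # per-sensor EKF measurement-noise values


# Environmental map keyed by waypoint, each entry carrying the
# self-position-estimation parameters for that travelling environment.
ENV_MAP = {
    "corridor_wp": LocalizationParams(
        sensors=("wheel_odometry", "imu"),
        meas_covariance={"wheel_odometry": 0.05, "imu": 0.10},
    ),
    "dim_atrium_wp": LocalizationParams(
        # Poor lighting degrades vision (cf. Liu para 0019), so the camera's
        # measurement noise is inflated and odometry dominates the fusion.
        sensors=("wheel_odometry", "imu", "visual_slam"),
        meas_covariance={"wheel_odometry": 0.05, "imu": 0.10, "visual_slam": 0.50},
    ),
}


def params_for(waypoint: str) -> LocalizationParams:
    """Dynamically switch estimation parameters as the waypoint changes."""
    return ENV_MAP[waypoint]


print(params_for("dim_atrium_wp").meas_covariance["visual_slam"])  # 0.5
```

The claimed "dynamic switching" would amount to calling something like `params_for` each time the mobile body reaches a new waypoint and feeding the returned covariances into the EKF update.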

Prosecution Timeline

Sep 08, 2025: Application Filed
Jan 10, 2026: Non-Final Rejection (§103)
Mar 16, 2026: Interview Requested
Mar 31, 2026: Examiner Interview Summary
Mar 31, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583435: TECHNIQUES FOR MANAGING POWER DISTRIBUTION BETWEEN ELECTRIFIED VEHICLE LOADS AND HIGH VOLTAGE BATTERY SYSTEM DURING LOW STATE OF CHARGE CONDITIONS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12553731: ARRIVAL PREDICTIONS BASED ON DESTINATION SPECIFIC MODEL
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12548446: COLLISION WARNING SYSTEM AND METHOD FOR A VEHICLE
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12509218: FLIGHT CONTROL FOR AN UNMANNED AERIAL VEHICLE
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12504286: SIMULTANEOUS LOCATION AND MAPPING (SLAM) USING DUAL EVENT CAMERAS
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 91% (+2.8%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 630 resolved cases by this examiner. Grant probability derived from career allow rate.
