Prosecution Insights
Last updated: April 18, 2026
Application No. 18/479,188

METHOD FOR DETERMINING A SELECTION AREA IN AN ENVIRONMENT FOR A MOBILE DEVICE

Final Rejection: §101, §103
Filed: Oct 02, 2023
Examiner: ALSOMAIRY, IBRAHIM ABDOALATIF
Art Unit: 3667
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Robert Bosch GmbH
OA Round: 4 (Final)
Grant Probability: 40% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 3y 2m
Grant Probability with Interview: 49%

Examiner Intelligence

Career Allow Rate: 40% (33 granted / 82 resolved; -11.8% vs TC average)
Interview Lift: +8.4% (moderate), measured across resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 43 applications currently pending
Career History: 125 total applications across all art units
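A quick back-of-the-envelope check of the figures above. The formulas are this report's assumptions rather than a published methodology, and the with/without-interview split is hypothetical:

```python
# Reconstruction of the examiner metrics shown above (assumed formulas).

granted, resolved = 33, 82                 # career history, from the dashboard
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # 40.2%, displayed as 40%

# The dashboard reports -11.8% vs the Tech Center average, implying:
tc_average = allow_rate + 0.118
print(f"Implied TC 3600 average: {tc_average:.1%}")  # about 52%

# Interview lift = allow rate among resolved cases with an interview minus
# the rate among those without. These (granted, resolved) splits are
# hypothetical, chosen only to land near the reported +8.4%.
g_with, r_with = 11, 24
g_without, r_without = 22, 58
lift = g_with / r_with - g_without / r_without
print(f"Interview lift: {lift:+.1%}")                # roughly +8%
```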

Statute-Specific Performance

§101: 14.7% (-25.3% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Based on career data from 82 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a Final Action on the Merits. Claims 1-15 are currently pending and are addressed below.

Response to Amendments

The amendment filed on January 13, 2026 has been considered and entered. Accordingly, claim 13 has been amended.

Response to Arguments

The applicant states (Amend. 7) that the amendment to claim 13 overcomes the means-plus-function limitation under 35 USC 112(f). The examiner respectfully disagrees. The claim has been amended to include “a driver configured to move the mobile device”. The term “driver” is a generic placeholder that does not recite sufficient structure to perform the recited function.

The Applicant states (Amend. 7) that claims 1-15 are “directed to a technical solution to a technical problem in the field of mobile device navigation. Specifically, it improves the operation of robots and other mobile devices by accurately determining selection areas in real-world environments, including areas that are otherwise difficult to detect or define, such as no-go zones or areas to be cleaned. The present invention achieves this by integrating sensor data from multiple sources (e.g., mobile and stationary devices), performing coarse-to-fine localization, and translating the resulting environmental information into actionable instructions for device navigation using a map annotated with data compatible with the sensors. These steps provide a concrete technological improvement, enabling more accurate, efficient, and automated navigation and operation of mobile devices” and as such the claims consist of patent-eligible subject matter. The examiner respectfully disagrees. Amended claim 1 at most discusses an abstract idea to determine a selection area in a mobile device’s environment. Even if, for the sake of the argument, the determination is a new idea, “a claim for a new abstract idea is still an abstract idea.” Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) (emphasis omitted); see also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016) (“A narrow claim directed to an abstract idea, however, is not necessarily patent-eligible.”). Furthermore, when a claim directed to an abstract idea contains no restriction on how an asserted improvement is accomplished and the asserted improvement is not described in the claim, then the claim does not become patent eligible. See Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016). Furthermore, the applicant asserts arguments for (1) an improvement to the operation of robots and other mobile devices and (2) a more general improvement to an existing technological process. However, the applicant’s arguments are not persuasive because claim 1 fails to recite (1) any limitations detailing how the asserted improvement to the operation of robots and other mobile devices is achieved, and (2) any limitations detailing how the asserted more general improvement to an existing technological process is achieved.
As noted above, when a claim directed to an abstract idea contains no restriction on how an asserted improvement is accomplished and the asserted improvement is not described in the claim, the claim does not become patent eligible. See Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016); see also MPEP 2106.04(d)(1) (“Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification”).

The applicant states (Amend. 8) that Abeling (US 20220205792 A1) (“Abeling”) fails to disclose the limitation “wherein the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system” as recited in independent claim 1, and similarly in independent claims 12-13 and 15. The examiner respectfully disagrees. Abeling discloses that a layer of the created map includes objects that are able to be detected by the sensor system of a vehicle (See at least Abeling Paragraph 3). The claim recites that “the map includes annotations that are compatible with the sensor data, including … environmental features detectable by the sensor system.” There is no limiting definition as to what constitutes an “annotation,” such that objects represented in the created map that are able to be sensed by the vehicle’s sensor system disclose the above limitation.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a driver to move” in at least claim 13.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The published specification provides corresponding structure in at least paragraph 37. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C.
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception to patentability (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and does not include an inventive concept that is something “significantly more” than the judicial exception under the January 2019 patentable subject matter eligibility guidance (2019 PEG) analysis which follows.

Under the 2019 PEG step 1 analysis, it must first be determined whether the claims are directed to one of the four statutory categories of invention (i.e., process, machine, manufacture, or composition of matter). Applying step 1 of the analysis for patentable subject matter to the claims, it is determined that the claims are directed to the statutory category of a process. Therefore, we proceed to step 2A, Prong 1.

Revised Guidance Step 2A – Prong 1

Under the 2019 PEG step 2A, Prong 1 analysis, it must be determined whether the claims recite an abstract idea that falls within one or more designated categories of patent ineligible subject matter (i.e., organizing human activity, mathematical concepts, and mental processes) that amount to a judicial exception to patentability. Here, with respect to independent claims 1, 12-13, and 15, the claims recite the abstract idea of determining an area for a mobile device to move, including the steps of “determining, based on the sensor data, the position and/or orientation of the entity in a map provided for navigation of the mobile device; providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data”, where these claims fall within one or more of the three enumerated 2019 PEG categories of patent ineligible subject matter, specifically, a mental process, since each of the above steps could alternatively be performed in the human mind or with the aid of pen and paper. This conclusion follows from CyberSource Corp. v. Retail Decisions, Inc., where our reviewing court held that section 101 did not embrace a process defined simply as using a computer to perform a series of mental steps that people, aware of each step, can and regularly do perform in their heads. 654 F.3d 1366, 1373 (Fed. Cir. 2011); see also In re Grams, 888 F.2d 835, 840–41 (Fed. Cir. 1989); In re Meyer, 688 F.2d 789, 794–95 (CCPA 1982); Elec. Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (“we have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category”). Additionally, mental processes remain unpatentable even when automated to reduce the burden on the user of what once could have been done with pen and paper.
See CyberSource, 654 F.3d at 1375 (“That purely mental processes can be unpatentable, even when performed by a computer, was precisely the holding of the Supreme Court in Gottschalk v. Benson.”). These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. For example, the claim limitation encompasses mentally determining an area for a mobile device to travel based on the information provided by the car’s sensors while traveling, or alternatively, mentally determining an area for a mobile device to travel based on observations by a human. A human could, mentally and with the aid of pen and paper, determine an area for a mobile device to travel.

Revised Guidance Step 2A – Prong 2

Under the 2019 PEG step 2A, Prong 2 analysis, the identified abstract idea to which the claim is directed does not include limitations that integrate the abstract idea into a practical application, since the additional elements of a sensor, control unit, and memory are merely generic components used as a tool (“apply it”) to implement the abstract idea. (See, e.g., MPEP §2106.05(f)). See Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). In addition, the limitations “providing sensor data obtained using a sensor system in the environment, the sensor data characterizing a position and/or orientation of an entity in the environment” and “wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data; wherein the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system” constitute insignificant pre-solution activity that merely gathers data and, therefore, do not integrate the exception into a practical application. See In re Bilski, 545 F.3d 943, 963 (Fed. Cir. 2008) (en banc), aff’d on other grounds, 561 U.S. 593 (2010) (characterizing data gathering steps as insignificant extra-solution activity); see also CyberSource, 654 F.3d at 1371–72 (noting that even if some physical steps are required to obtain information from a database (e.g., entering a query via a keyboard, clicking a mouse), such data-gathering steps cannot alone confer patentability); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering). Accord Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05(g)). Furthermore, the limitation “providing information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating” is insignificant post-solution activity. The Supreme Court guides that the “prohibition against patenting abstract ideas ‘cannot be circumvented by attempting to limit the use of the formula to a particular technological environment’ or [by] adding ‘insignificant postsolution activity.’” Bilski, 561 U.S. at 610–11 (quoting Diehr, 450 U.S. at 191–92). Providing information to a mobile device is mere insignificant extra-solution activity, as supported by MPEP 2106.05(g); see printing or downloading generated menus, Ameranth, 842 F.3d at 1241–42, 120 USPQ2d at 1854–55.
Mere instruction to apply an exception using generic computer components cannot provide an inventive concept. In addition, merely “[u]sing a computer to accelerate an ineligible mental process does not make that process patent-eligible.” Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1279 (Fed. Cir. 2012); see also CLS Bank Int’l v. Alice Corp. Pty. Ltd., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc) (“simply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.”), aff’d, 573 U.S. 208 (2014). Accordingly, the additional element of a processor does not transform the abstract idea into a practical application of the abstract idea.

Revised Guidance Step 2B

Under the 2019 PEG step 2B analysis, the additional elements are evaluated to determine whether they amount to something “significantly more” than the recited abstract idea (i.e., an inventive concept). Here, the additional elements, such as a sensor, control unit, and memory, do not amount to an inventive concept since, as stated above in the step 2A, Prong 2 analysis, the claims are simply using the additional elements as a tool to carry out the abstract idea (i.e., “apply it”) on a computer or computing device and/or via software programming. (See, e.g., MPEP §2106.05(f)). The additional elements are specified at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. (See, e.g., MPEP §2106.05 I.A.). See Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). Thus, these elements, taken individually or together, do not amount to “significantly more” than the abstract ideas themselves.

The additional elements of dependent claims 2-11 and 14 merely refine and further limit the abstract idea of the independent claims and do not add any feature that is an “inventive concept” which cures the deficiencies of their respective parent claim under the 2019 PEG analysis. None of the dependent claims considered individually, including their respective limitations, includes an “inventive concept” of some additional element or combination of elements sufficient to ensure that the claims in practice amount to something “significantly more” than the patent-ineligible subject matter to which the claims are directed. The elements of the instant claimed invention, when taken in combination, do not offer substantially more than the sum of the functions of the elements when each is taken alone. The claims as a whole do not amount to significantly more than the abstract idea itself because the claims do not effect an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of an electronic device itself which implements the abstract idea (e.g., the general purpose computer and/or the computer system which implements the process are not made more efficient or technologically improved); the claims do not perform a transformation or reduction of a particular article to a different state or thing (i.e., the claims do not use the abstract idea in the claimed process to bring about a physical change, see, e.g., Diamond v. Diehr, 450 U.S. 175 (1981), where a physical change, and thus patentability, was imparted by the claimed process; contrast Parker v. Flook, 437 U.S.
584 (1978), where a physical change, and thus patentability, was not imparted by the claimed process); and the claims do not move beyond a general link of the use of the abstract idea to a particular technological environment (e.g., “for determining a selection area in an environment for a mobile device . . . sensor system”, claim 1). Accordingly, claims 1-15 are rejected under 35 USC 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 7, and 10-15 are rejected under 35 U.S.C. 103 as being unpatentable over Cui (US 20200159246 A1) (“Cui”) in view of Hillen (US 20160027207 A1) (“Hillen”) in view of Abdelkader (US 20200174129 A1) (“Abdelkader”) in view of Abeling (US 20220205792 A1) (“Abeling”).

With respect to claim 1, Cui teaches a method for determining a selection area in an environment for a mobile device, comprising the following steps: providing sensor data obtained using a sensor system in the environment, the sensor data characterizing a position and/or orientation of an entity in the environment; determining, based on the sensor data, the position and/or orientation of the entity in a map provided for navigation of the mobile device (See at least Cui FIG. 2 and Paragraphs 95-96 “In some applications, the mobile robot is a cleaning robot; step S520 further includes planning a navigation route which traverse a cleaning region covering the corresponding entity objects according to the entity object information and the projection position information thereof marked on the map.
In some examples, a cleaning robot plans a navigation route traversing a cleaning region based on a pre-determined cleaning region, wherein according to entity object information on the map corresponding to the cleaning region, the cleaning robot determines a navigation route which is convenient for cleaning based on the corresponding entity object information … In the navigation method of the present application, an entity object which consistent with pre-marked entity object information is identified, and relative spatial position between the identified entity object and a mobile robot is determined based on images captured by an image acquisition device, so that the entity object information is marked on the map built with the method shown in FIG. 2, and a map marked with the entity object information is generated, thus during the subsequent movement and control, the mobile robot can identify destination position containing in user instruction based on entity object information marked on the map, and further moves to the position, thereby precision of the navigation route of the mobile robot, and human-computer interaction are improved”).

Cui, however, fails to explicitly disclose providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating; wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data; wherein the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Hillen teaches providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating (See at least Hillen Paragraphs 55-67 “A device 1, for example, equipped with an on-board computer on which the calculations necessary for comparing the location data of the selected subarea 2 with the location data contained in the map for identifying subarea 2 is capable of carrying out the method. Mobile end device 4 may be for example a smartphone, a tablet PC or the like. Device 1 and mobile end device 4 communicate with each other via a wireless network connection, for example a wireless connection (WLAN or Bluetooth, etc.) … A map of the room must first be produced before it is even possible to compare the location data for selected subarea 2 with known location data for the room. The map is particularly advantageously a mosaic map, which is compiled from a number of single images and views the room floor from the bird's eye perspective.
While the single images that are to form the map later are being collected, it is recommended to carry out a run of the device without cleaning/processing the room at the same time, to avoid damaging the subsurface if a wiping device is attached, for example … In the second step, the new single image at the calculated position and the calculated orientation is inserted in the existing map. The brightness of the single image to be included may also be adjusted, so that the brightness of the resulting map appears as uniform as possible … Integrating information on the height of an obstacle 5: The map that is compiled from the single images contains only the rectified view of the floor from device 1. However, since device 1 also maps the floor in subareas 2 that are not visible to humans, e.g., under tables and beds, the user may find it difficult to work out what he is looking at in the mosaic map. It is therefore helpful to include additional information in the map, to make the map easier to read for the user. This might include for example additional height information regarding obstacles 5 the device travels under, or also obstacles 5 close to the floor which it cannot travel under. The height of the obstacles 5 the device travels under may be calculated by triangulation from two single images, for example, if the movement of device 1 between the locations where the single images were taken is known … The comparison of the location data for selected subarea 2 with the location data contained in the map for identifying subarea 2 is carried out in similar fashion to the image registration described previously. In this process, the position determination is based for example on the calculation of matches between location data, such as local image features for example, or on a search process and comparison … In order to identify the subarea 2 selected by the user on the map, the photograph 3 taken by the user is compared with the map in various positions, in various orientations, and even with multiple perspective distortions. The position in the map for which the greatest correspondence is found, is then determined to be the position of subarea 2 … Communicating a user command regarding selected subarea 2: Device 1 must behave differently with respect to the selected subarea 2 depending on the cleaning or processing action the user wishes device 1 to perform. For example, it may be provided that device 1 travels to the selected subarea 2 and carries out a cleaning or processing activity. Or, it may also be provided that the selected subarea 2 is to be omitted from the cleaning or processing of the room by device 1. In the former case, device 1 travels straight to the subarea 2 in the room selected by the user as soon as subarea 2 has been identified. To do this, device 1 plans a route from its current position to the desired subarea 2, and traverses this route autonomously. Upon reaching the desired subarea 2, device 1 begins the cleaning or processing activity according to the specified size of subarea 2.”). 
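As a technical aside for readers, the subarea identification Hillen describes (comparing the user's photograph against the map at many candidate positions and orientations and keeping the greatest correspondence) is essentially exhaustive template matching. A minimal sketch of that idea, assuming OpenCV and grayscale inputs; the function and variable names are illustrative, not drawn from Hillen or the application:

```python
import cv2
import numpy as np

def locate_subarea(map_img: np.ndarray, photo: np.ndarray) -> tuple:
    """Hypothetical sketch: find the map position best matching a user photo.

    Mirrors Hillen's described search over positions and orientations;
    perspective distortion is omitted here for brevity.
    """
    best = (None, -1.0)  # ((x, y, angle), correlation score)
    for angle in range(0, 360, 15):  # coarse sweep over orientations
        center = (photo.shape[1] / 2, photo.shape[0] / 2)
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(photo, rot, (photo.shape[1], photo.shape[0]))
        # Normalized cross-correlation over every translation of the template
        scores = cv2.matchTemplate(map_img, rotated, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        if max_val > best[1]:
            best = ((max_loc[0], max_loc[1], angle), max_val)
    return best  # position/orientation with the greatest correspondence
```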
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui to include providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating, as taught by Hillen as disclosed above, in order to ensure safe and efficient travel of the mobile device (Hillen Paragraph 3 “The invention relates to a method for cleaning or processing a room by means of an autonomously mobile device, wherein the method includes the following steps”).

Cui in view of Hillen fail to explicitly disclose wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data; wherein the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Abdelkader teaches wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data (See at least Abdelkader FIG. 5 and Paragraph 76 “As illustrated in FIG. 5, the coarse-to-fine approach includes using the localization result from the depth camera (e.g., coarse localization) to constrain the search space of the 2D LIDAR measurements as the UAV 300 moves towards the perching point on the target 250. This constrained search space represents the portion of the depth camera FOV most likely to isolate the target 250 (or portion of interest of the target 250, such as the perching position). The 2D LIDAR measurements inside the constrained space are used to detect and localize the target 250 (e.g., fine localization), constraining both the size and direction (e.g., 1D scan) to perform the 2D LIDAR measurements.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui in view of Hillen to include that the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data, as taught by Abdelkader as disclosed above, in order to ensure accurate entity location (Abdelkader Paragraph 68 “Therefore, by combining the two sensors, ambiguity of object detection and localization can be avoided by using a depth camera at far distances from the target 250, and accuracy and precision of alignment and perching can be improved by using a 2D LIDAR system at close distances to the target 250. In addition, processing power by the control circuit is reduced by consuming only one sensor's output in each of the two independent steps.”).
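For readers unfamiliar with the coarse-to-fine scheme the examiner reads onto this limitation, the idea in Abdelkader is to let a cheap, wide-view estimate bound the search space of a more precise sensor. A minimal sketch under that reading; the interfaces are hypothetical and not taken from Abdelkader or the claims:

```python
import numpy as np

def coarse_to_fine_localize(coarse_estimate: np.ndarray,
                            lidar_points: np.ndarray,
                            window: float = 0.5) -> np.ndarray:
    """Hypothetical two-stage localization sketch.

    Stage 1 (coarse): a position estimate from first sensor data,
    e.g., a depth camera, given here as coarse_estimate = [x, y].
    Stage 2 (fine): refine using further sensor data (2D LIDAR returns,
    one [x, y] row per point), restricting attention to points near the
    coarse estimate, in the spirit of Abdelkader's constrained search space.
    """
    # Keep only LIDAR returns inside a window around the coarse estimate
    mask = np.all(np.abs(lidar_points - coarse_estimate) < window, axis=1)
    nearby = lidar_points[mask]
    if nearby.size == 0:
        return coarse_estimate       # fall back to the coarse result
    return nearby.mean(axis=0)       # centroid as the refined position
```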
Cui in view of Hillen in view of Abdelkader fail to explicitly disclose that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Abeling teaches that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system (See at least Abeling Paragraph 3 ““A first map” and/or “a second map” is, for example, to be understood to mean a digital map that is present in the form of (map) data values on a memory medium. This map is designed, for example, in such a way that one or multiple map layers are encompassed, one map layer showing a map from the bird's eye perspective (courses and positions of streets, buildings, landscape features, etc.), for example. This corresponds to a map of a navigation system, for example. Another map layer includes, for example, a radar map, the surroundings features, which are displayed by the radar map, being stored together with a radar signature. Another map layer includes, for example, a LIDAR map, the surroundings features, which are displayed by the LIDAR map, being stored together with a LIDAR point cloud and/or LIDAR objects. Another map layer encompasses, for example, a video map, the surroundings features, which are displayed by the video map, being stored together with objects recognizable by a video sensor”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui in view of Hillen in view of Abdelkader to include that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system, as taught by Abeling as disclosed above, in order to ensure accurate obstacle detection (Abeling Paragraph 12 “With the aid of the method according to the present invention, the object of improving the reliability and precision of map data is advantageously achieved.”).

With respect to claim 2, Cui in view of Hillen in view of Abdelkader in view of Abeling teach that the mobile device is a robot (See at least Cui Paragraph 14 “In some embodiments, the mobile robot is a cleaning robot;”).

With respect to claim 3, Cui in view of Hillen in view of Abdelkader in view of Abeling teach that the entity includes a mobile terminal device, including a smartphone or a tablet, the mobile terminal device having at least part of the sensor system (See at least Hillen Paragraphs 51-55 “Although a device 1 with only a single (i.e., monocular) camera is described in the figures, other camera systems may also be used. To this end, for example, omnidirectional camera systems with a very wide horizontal field of view, typically 360° all-round vision, or stereo camera systems with two or more cameras may be used … A device 1, for example, equipped with an on-board computer on which the calculations necessary for comparing the location data of the selected subarea 2 with the location data contained in the map for identifying subarea 2 is capable of carrying out the method. Mobile end device 4 may be for example a smartphone, a tablet PC or the like. Device 1 and mobile end device 4 communicate with each other via a wireless network connection, for example a wireless connection (WLAN or Bluetooth, etc.). In this context, autonomously mobile device 1 and the user's mobile end device 4 may communicate with each other directly (peer to peer connection) or by registering device 1 and end device 4 on a network, that is to say on an external computer system”).
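Stepping back briefly to the Abeling reference applied to claim 1 above: the multi-layer map it describes is easy to picture as a data structure with a base layer plus per-sensor layers whose entries are stored in a form the matching sensor can re-detect. A minimal sketch; the field and method names are illustrative only, not from Abeling or the claims:

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedMap:
    """Sketch of a map whose annotations are compatible with sensor data."""
    base_layer: dict = field(default_factory=dict)        # streets, buildings
    camera_images: list = field(default_factory=list)     # video-detectable objects
    wifi_signatures: list = field(default_factory=list)   # signal fingerprints
    lidar_features: list = field(default_factory=list)    # point clouds / objects

    def annotations_for(self, sensor_layer: str) -> list:
        """Return the layer a given onboard sensor can match against."""
        return getattr(self, sensor_layer, [])

# Usage: a robot carrying only a camera queries the camera-compatible layer.
m = AnnotatedMap(camera_images=[{"id": "doorway", "descriptor": "..."}])
print(m.annotations_for("camera_images"))
```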
With respect to claim 5, Cui in view of Hillen in view of Abdelkader in view of Abeling teach that the entity includes a contamination or an object in the environment related to the selection area to be determined (See at least Hillen Paragraph 46 “The room part shown includes a subarea 2 of a room with an obstacle 5, specifically a wall. However, obstacle 5 may also be another object that device 1 is not able to negotiate, for example beds, cupboards and the like, the ground clearance of which is not higher than device 1, so device 1 cannot pass underneath these obstacles 5”).

With respect to claim 7, Cui in view of Hillen in view of Abdelkader in view of Abeling teach that the sensor system includes a camera, and the sensor data and/or the specification data include images acquired by the camera (See at least Hillen Paragraphs 51-55 “Although a device 1 with only a single (i.e., monocular) camera is described in the figures, other camera systems may also be used. To this end, for example, omnidirectional camera systems with a very wide horizontal field of view, typically 360° all-round vision, or stereo camera systems with two or more cameras may be used …”).

With respect to claim 10, Cui in view of Hillen in view of Abdelkader in view of Abeling teach the specification data include images acquired by the camera, and the method further comprises: providing, in the images acquired by the camera, additional information that characterizes the selection area, the additional information including edges and/or an area of the selection area, wherein the selection area being determined based on the specification data and the additional information (See at least Hillen Paragraph 61 “Integrating information on the height of an obstacle 5: The map that is compiled from the single images contains only the rectified view of the floor from device 1. However, since device 1 also maps the floor in subareas 2 that are not visible to humans, e.g., under tables and beds, the user may find it difficult to work out what he is looking at in the mosaic map. It is therefore helpful to include additional information in the map, to make the map easier to read for the user. This might include for example additional height information regarding obstacles 5 the device travels under, or also obstacles 5 close to the floor which it cannot travel under. The height of the obstacles 5 the device travels under may be calculated by triangulation from two single images, for example, if the movement of device 1 between the locations where the single images were taken is known. This is shown in FIG. 5. In order to simplify the map display for the user, the height of obstacle 5 that is calculated by this method may be assigned to a pre-set category, for example “close to the ground” (travel underneath is not possible), “low” (under beds or cupboards), “medium” (under chairs), or “high” (under tables). Height indications may be colour coded on the map, for example. The map created in this way may be stored in device 1 itself, for example, but alternatively it is also possible to stored it on the user's mobile end device 4.”).
With respect to claim 11, Cui in view of Hillen in view of Abdelkader in view of Abeling teach (i) the selection area includes an area to be processed by the mobile device (See at least Hillen Paragraphs 16-22 “Creating a map of the room, Storing the map of the room in a data memory, Selection of a subarea of the room by a user, Transmitting the location data for the selected subarea to a processing unit connected to the data memory, Comparing the location data for the selected subarea with the location data contained in the map for identification of the subarea by the processing unit, Cleaning or processing the room taking into account a user command with regard to the selected subarea”), or (ii) the selection area includes an area in which the mobile device is not permitted to move or is not intended to move.

With respect to claim 12, Cui teaches a system for data processing, the system comprising: a processor configured to determine a selection area in an environment for a mobile device, the processor configured to: provide sensor data obtained using a sensor system in the environment, the sensor data characterizing a position and/or orientation of an entity in the environment; determine, based on the sensor data, the position and/or orientation of the entity in a map provided for navigation of the mobile device (See at least Cui FIG. 2 and Paragraphs 95-96 “In some applications, the mobile robot is a cleaning robot; step S520 further includes planning a navigation route which traverse a cleaning region covering the corresponding entity objects according to the entity object information and the projection position information thereof marked on the map. In some examples, a cleaning robot plans a navigation route traversing a cleaning region based on a pre-determined cleaning region, wherein according to entity object information on the map corresponding to the cleaning region, the cleaning robot determines a navigation route which is convenient for cleaning based on the corresponding entity object information … In the navigation method of the present application, an entity object which consistent with pre-marked entity object information is identified, and relative spatial position between the identified entity object and a mobile robot is determined based on images captured by an image acquisition device, so that the entity object information is marked on the map built with the method shown in FIG. 2, and a map marked with the entity object information is generated, thus during the subsequent movement and control, the mobile robot can identify destination position containing in user instruction based on entity object information marked on the map, and further moves to the position, thereby precision of the navigation route of the mobile robot, and human-computer interaction are improved”).

Cui, however, fails to explicitly disclose provide specification data obtained using the sensor system, the specification data characterizing the selection area; determine the selection area in the map based on the specification data; and provide information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating; and the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.
Hillen teaches provide specification data obtained using the sensor system, the specification data characterizing the selection area; determine the selection area in the map based on the specification data; and provide information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating (See at least Hillen Paragraphs 55-67 “A device 1, for example, equipped with an on-board computer on which the calculations necessary for comparing the location data of the selected subarea 2 with the location data contained in the map for identifying subarea 2 is capable of carrying out the method. Mobile end device 4 may be for example a smartphone, a tablet PC or the like. Device 1 and mobile end device 4 communicate with each other via a wireless network connection, for example a wireless connection (WLAN or Bluetooth, etc.) … A map of the room must first be produced before it is even possible to compare the location data for selected subarea 2 with known location data for the room. The map is particularly advantageously a mosaic map, which is compiled from a number of single images and views the room floor from the bird's eye perspective. While the single images that are to form the map later are being collected, it is recommended to carry out a run of the device without cleaning/processing the room at the same time, to avoid damaging the subsurface if a wiping device is attached, for example … In the second step, the new single image at the calculated position and the calculated orientation is inserted in the existing map. The brightness of the single image to be included may also be adjusted, so that the brightness of the resulting map appears as uniform as possible … Integrating information on the height of an obstacle 5: The map that is compiled from the single images contains only the rectified view of the floor from device 1. However, since device 1 also maps the floor in subareas 2 that are not visible to humans, e.g., under tables and beds, the user may find it difficult to work out what he is looking at in the mosaic map. It is therefore helpful to include additional information in the map, to make the map easier to read for the user. This might include for example additional height information regarding obstacles 5 the device travels under, or also obstacles 5 close to the floor which it cannot travel under. The height of the obstacles 5 the device travels under may be calculated by triangulation from two single images, for example, if the movement of device 1 between the locations where the single images were taken is known … The comparison of the location data for selected subarea 2 with the location data contained in the map for identifying subarea 2 is carried out in similar fashion to the image registration described previously. In this process, the position determination is based for example on the calculation of matches between location data, such as local image features for example, or on a search process and comparison … In order to identify the subarea 2 selected by the user on the map, the photograph 3 taken by the user is compared with the map in various positions, in various orientations, and even with multiple perspective distortions. 
The position in the map for which the greatest correspondence is found, is then determined to be the position of subarea 2 … Communicating a user command regarding selected subarea 2: Device 1 must behave differently with respect to the selected subarea 2 depending on the cleaning or processing action the user wishes device 1 to perform. For example, it may be provided that device 1 travels to the selected subarea 2 and carries out a cleaning or processing activity. Or, it may also be provided that the selected subarea 2 is to be omitted from the cleaning or processing of the room by device 1. In the former case, device 1 travels straight to the subarea 2 in the room selected by the user as soon as subarea 2 has been identified. To do this, device 1 plans a route from its current position to the desired subarea 2, and traverses this route autonomously. Upon reaching the desired subarea 2, device 1 begins the cleaning or processing activity according to the specified size of subarea 2.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Cui to include provide specification data obtained using the sensor system, the specification data characterizing the selection area; determine the selection area in the map based on the specification data; and provide information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating, as taught by Hillen as disclosed above, in order to ensure safe and efficient travel of the mobile device (Hillen Paragraph 3 “The invention relates to a method for cleaning or processing a room by means of an autonomously mobile device, wherein the method includes the following steps”).

Cui in view of Hillen fail to explicitly disclose wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data; and the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Abdelkader teaches wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data (See at least Abdelkader FIG. 5 and Paragraph 76 “As illustrated in FIG. 5, the coarse-to-fine approach includes using the localization result from the depth camera (e.g., coarse localization) to constrain the search space of the 2D LIDAR measurements as the UAV 300 moves towards the perching point on the target 250. This constrained search space represents the portion of the depth camera FOV most likely to isolate the target 250 (or portion of interest of the target 250, such as the perching position). The 2D LIDAR measurements inside the constrained space are used to detect and localize the target 250 (e.g., fine localization), constraining both the size and direction (e.g., 1D scan) to perform the 2D LIDAR measurements.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Cui in view of Hillen to include that the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data, as taught by Abdelkader as disclosed above, in order to ensure accurate entity location (Abdelkader Paragraph 68 “Therefore, by combining the two sensors, ambiguity of object detection and localization can be avoided by using a depth camera at far distances from the target 250, and accuracy and precision of alignment and perching can be improved by using a 2D LIDAR system at close distances to the target 250. In addition, processing power by the control circuit is reduced by consuming only one sensor's output in each of the two independent steps.”).

Cui in view of Hillen in view of Abdelkader fail to explicitly disclose that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Abeling teaches that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system (See at least Abeling Paragraph 3 ““A first map” and/or “a second map” is, for example, to be understood to mean a digital map that is present in the form of (map) data values on a memory medium. This map is designed, for example, in such a way that one or multiple map layers are encompassed, one map layer showing a map from the bird's eye perspective (courses and positions of streets, buildings, landscape features, etc.), for example. This corresponds to a map of a navigation system, for example. Another map layer includes, for example, a radar map, the surroundings features, which are displayed by the radar map, being stored together with a radar signature. Another map layer includes, for example, a LIDAR map, the surroundings features, which are displayed by the LIDAR map, being stored together with a LIDAR point cloud and/or LIDAR objects. Another map layer encompasses, for example, a video map, the surroundings features, which are displayed by the video map, being stored together with objects recognizable by a video sensor”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Cui in view of Hillen in view of Abdelkader to include that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system, as taught by Abeling as disclosed above, in order to ensure accurate obstacle detection (Abeling Paragraph 12 “With the aid of the method according to the present invention, the object of improving the reliability and precision of map data is advantageously achieved.”).
With respect to claim 13, Cui teaches a mobile device, comprising: a control or regulating unit; and a driver configured to move the mobile device; wherein the mobile device is configured to obtain information about a selection area determined by: providing sensor data obtained using a sensor system in the environment, the sensor data characterizing a position and/or orientation of an entity in the environment; determining, based on the sensor data, the position and/or orientation of the entity in a map provided for navigation of the mobile device (See at least Cui FIG. 2 and Paragraphs 95-96 “In some applications, the mobile robot is a cleaning robot; step S520 further includes planning a navigation route which traverse a cleaning region covering the corresponding entity objects according to the entity object information and the projection position information thereof marked on the map. In some examples, a cleaning robot plans a navigation route traversing a cleaning region based on a pre-determined cleaning region, wherein according to entity object information on the map corresponding to the cleaning region, the cleaning robot determines a navigation route which is convenient for cleaning based on the corresponding entity object information … In the navigation method of the present application, an entity object which consistent with pre-marked entity object information is identified, and relative spatial position between the identified entity object and a mobile robot is determined based on images captured by an image acquisition device, so that the entity object information is marked on the map built with the method shown in FIG. 2, and a map marked with the entity object information is generated, thus during the subsequent movement and control, the mobile robot can identify destination position containing in user instruction based on entity object information marked on the map, and further moves to the position, thereby precision of the navigation route of the mobile robot, and human-computer interaction are improved”).

Cui, however, fails to explicitly disclose providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating; and the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Hillen teaches providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selecting area instructing the mobile device to correspondingly take the selection area into account based on the navigating (See at least Hillen Paragraphs 55-67 “A device 1, for example, equipped with an on-board computer on which the calculations necessary for comparing the location data of the selected subarea 2 with the location data contained in the map for identifying subarea 2 is capable of carrying out the method. Mobile end device 4 may be for example a smartphone, a tablet PC or the like.
Device 1 and mobile end device 4 communicate with each other via a wireless network connection, for example a wireless connection (WLAN or Bluetooth, etc.) … A map of the room must first be produced before it is even possible to compare the location data for selected subarea 2 with known location data for the room. The map is particularly advantageously a mosaic map, which is compiled from a number of single images and views the room floor from the bird's eye perspective. While the single images that are to form the map later are being collected, it is recommended to carry out a run of the device without cleaning/processing the room at the same time, to avoid damaging the subsurface if a wiping device is attached, for example … In the second step, the new single image at the calculated position and the calculated orientation is inserted in the existing map. The brightness of the single image to be included may also be adjusted, so that the brightness of the resulting map appears as uniform as possible … Integrating information on the height of an obstacle 5: The map that is compiled from the single images contains only the rectified view of the floor from device 1. However, since device 1 also maps the floor in subareas 2 that are not visible to humans, e.g., under tables and beds, the user may find it difficult to work out what he is looking at in the mosaic map. It is therefore helpful to include additional information in the map, to make the map easier to read for the user. This might include for example additional height information regarding obstacles 5 the device travels under, or also obstacles 5 close to the floor which it cannot travel under. The height of the obstacles 5 the device travels under may be calculated by triangulation from two single images, for example, if the movement of device 1 between the locations where the single images were taken is known … The comparison of the location data for selected subarea 2 with the location data contained in the map for identifying subarea 2 is carried out in similar fashion to the image registration described previously. In this process, the position determination is based for example on the calculation of matches between location data, such as local image features for example, or on a search process and comparison … In order to identify the subarea 2 selected by the user on the map, the photograph 3 taken by the user is compared with the map in various positions, in various orientations, and even with multiple perspective distortions. The position in the map for which the greatest correspondence is found, is then determined to be the position of subarea 2 … Communicating a user command regarding selected subarea 2: Device 1 must behave differently with respect to the selected subarea 2 depending on the cleaning or processing action the user wishes device 1 to perform. For example, it may be provided that device 1 travels to the selected subarea 2 and carries out a cleaning or processing activity. Or, it may also be provided that the selected subarea 2 is to be omitted from the cleaning or processing of the room by device 1. In the former case, device 1 travels straight to the subarea 2 in the room selected by the user as soon as subarea 2 has been identified. To do this, device 1 plans a route from its current position to the desired subarea 2, and traverses this route autonomously. Upon reaching the desired subarea 2, device 1 begins the cleaning or processing activity according to the specified size of subarea 2.”). 
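Hillen's identification step ("compared with the map in various positions, in various orientations … the position in the map for which the greatest correspondence is found") is, at bottom, an exhaustive registration search. Below is a minimal sketch under stated assumptions: grayscale numpy arrays and 90-degree rotations only, scored by normalized cross-correlation. Real systems would use feature matching and perspective warps; all names are hypothetical.

```python
import numpy as np

def locate_subarea(floor_map: np.ndarray, photo: np.ndarray):
    """Brute-force search for the photo's best pose (x, y, angle) in the map."""
    best_pose, best_score = None, -np.inf
    for quarter_turns in range(4):                 # 0/90/180/270 degrees
        patch = np.rot90(photo, k=quarter_turns)
        ph, pw = patch.shape
        for y in range(floor_map.shape[0] - ph + 1):
            for x in range(floor_map.shape[1] - pw + 1):
                window = floor_map[y:y + ph, x:x + pw]
                denom = window.std() * patch.std()
                if denom == 0:
                    continue
                # normalized cross-correlation, in [-1, 1]
                score = ((window - window.mean()) *
                         (patch - patch.mean())).mean() / denom
                if score > best_score:
                    best_pose, best_score = (x, y, 90 * quarter_turns), score
    return best_pose, best_score

# e.g. a 2x2 photo cut from a random 6x6 map:
rng = np.random.default_rng(0)
m = rng.random((6, 6))
print(locate_subarea(m, m[2:4, 1:3]))  # expected pose (1, 2, 0), score ~1.0
```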
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Cui to include providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selection area instructing the mobile device to correspondingly take the selection area into account based on the navigating, as taught by Hillen as disclosed above, in order to ensure safe and efficient travel of the mobile device (Hillen Paragraph 3 “The invention relates to a method for cleaning or processing a room by means of an autonomously mobile device, wherein the method includes the following steps”).

Cui in view of Hillen fail to explicitly disclose wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data; and the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Abdelkader teaches wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data (See at least Abdelkader FIG. 5 and Paragraph 76 “As illustrated in FIG. 5, the coarse-to-fine approach includes using the localization result from the depth camera (e.g., coarse localization) to constrain the search space of the 2D LIDAR measurements as the UAV 300 moves towards the perching point on the target 250. This constrained search space represents the portion of the depth camera FOV most likely to isolate the target 250 (or portion of interest of the target 250, such as the perching position). The 2D LIDAR measurements inside the constrained space are used to detect and localize the target 250 (e.g., fine localization), constraining both the size and direction (e.g., 1D scan) to perform the 2D LIDAR measurements.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Cui in view of Hillen to include that the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data, as taught by Abdelkader as disclosed above, in order to ensure accurate entity location (Abdelkader Paragraph 68 “Therefore, by combining the two sensors, ambiguity of object detection and localization can be avoided by using a depth camera at far distances from the target 250, and accuracy and precision of alignment and perching can be improved by using a 2D LIDAR system at close distances to the target 250. In addition, processing power by the control circuit is reduced by consuming only one sensor's output in each of the two independent steps.”).

Cui in view of Hillen in view of Abdelkader fail to explicitly disclose that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.
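The coarse-to-fine split quoted from Abdelkader can be sketched in a few lines: the coarse fix from the first sensor bounds the region in which the second sensor's returns are even considered, which is also why only one sensor's output needs processing per step. Function and parameter names below are hypothetical.

```python
def coarse_to_fine(depth_fix, lidar_points, window=0.5):
    """depth_fix: (x, y) coarse target estimate from the depth camera.
    lidar_points: iterable of (x, y) returns from the 2D LIDAR.
    window: half-width (m) of the search region around the coarse fix."""
    cx, cy = depth_fix
    # Keep only LIDAR returns inside the window the coarse fix defines;
    # this is the "constrained search space".
    inliers = [(x, y) for (x, y) in lidar_points
               if abs(x - cx) <= window and abs(y - cy) <= window]
    if not inliers:
        return depth_fix          # fall back to the coarse estimate
    # Fine estimate: centroid of the constrained returns.
    n = len(inliers)
    return (sum(x for x, _ in inliers) / n, sum(y for _, y in inliers) / n)

print(coarse_to_fine((2.0, 1.0), [(1.9, 1.1), (2.1, 0.9), (9.0, 9.0)]))
# -> (2.0, 1.0): the distant outlier never enters the fine step
```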
Abeling teaches that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system (See at least Abeling Paragraph 3 ““A first map” and/or “a second map” is, for example, to be understood to mean a digital map that is present in the form of (map) data values on a memory medium. This map is designed, for example, in such a way that one or multiple map layers are encompassed, one map layer showing a map from the bird's eye perspective (courses and positions of streets, buildings, landscape features, etc.), for example. This corresponds to a map of a navigation system, for example. Another map layer includes, for example, a radar map, the surroundings features, which are displayed by the radar map, being stored together with a radar signature. Another map layer includes, for example, a LIDAR map, the surroundings features, which are displayed by the LIDAR map, being stored together with a LIDAR point cloud and/or LIDAR objects. Another map layer encompasses, for example, a video map, the surroundings features, which are displayed by the video map, being stored together with objects recognizable by a video sensor”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Cui in view of Hillen in view of Abdelkader to include that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system, as taught by Abeling as disclosed above, in order to ensure accurate obstacle detection (Abeling Paragraph 12 “With the aid of the method according to the present invention, the object of improving the reliability and precision of map data is advantageously achieved.”).

With respect to claim 14, Cui in view of Hillen in view of Abdelkader in view of Abeling teach that the mobile device is a robot, or a household robot, or a cleaning robot, or a floor or street cleaning device, or a lawn mowing robot, or a service robot, or a construction robot, or a vehicle moving in an at least partially automated manner, or a passenger transport vehicle, or a goods transport vehicle, or a drone (See at least Cui Paragraph 14 “In some embodiments, the mobile robot is a cleaning robot;” | Paragraph 159 “In step S820, the movement device is controlled according to the navigation route to adjust position and pose of the mobile robot, so that the mobile robot moves autonomously along the navigation route”).

With respect to claim 15, Cui teaches a non-transitory computer-readable storage medium on which is stored a computer program for determining a selection area in an environment for a mobile device, the computer program, when executed by a computer, causing the computer to perform the following steps: providing sensor data obtained using a sensor system in the environment, the sensor data characterizing a position and/or orientation of an entity⁴ in the environment; determining, based on the sensor data, the position and/or orientation of the entity in a map provided for navigation of the mobile device (See at least Cui FIG.
2 and Paragraphs 95-96 “In some applications, the mobile robot is a cleaning robot; step S520 further includes planning a navigation route which traverse a cleaning region covering the corresponding entity objects according to the entity object information and the projection position information thereof marked on the map. In some examples, a cleaning robot plans a navigation route traversing a cleaning region based on a pre-determined cleaning region, wherein according to entity object information on the map corresponding to the cleaning region, the cleaning robot determines a navigation route which is convenient for cleaning based on the corresponding entity object information … In the navigation method of the present application, an entity object which consistent with pre-marked entity object information is identified, and relative spatial position between the identified entity object and a mobile robot is determined based on images captured by an image acquisition device, so that the entity object information is marked on the map built with the method shown in FIG. 2, and a map marked with the entity object information is generated, thus during the subsequent movement and control, the mobile robot can identify destination position containing in user instruction based on entity object information marked on the map, and further moves to the position, thereby precision of the navigation route of the mobile robot, and human-computer interaction are improved”).

Cui, however, fails to explicitly disclose providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selection area instructing the mobile device to correspondingly take the selection area into account based on the navigating; and the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Hillen teaches providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selection area instructing the mobile device to correspondingly take the selection area into account based on the navigating (See at least Hillen Paragraphs 55-67 “A device 1, for example, equipped with an on-board computer on which the calculations necessary for comparing the location data of the selected subarea 2 with the location data contained in the map for identifying subarea 2 is capable of carrying out the method. Mobile end device 4 may be for example a smartphone, a tablet PC or the like. Device 1 and mobile end device 4 communicate with each other via a wireless network connection, for example a wireless connection (WLAN or Bluetooth, etc.) … A map of the room must first be produced before it is even possible to compare the location data for selected subarea 2 with known location data for the room. The map is particularly advantageously a mosaic map, which is compiled from a number of single images and views the room floor from the bird's eye perspective.
While the single images that are to form the map later are being collected, it is recommended to carry out a run of the device without cleaning/processing the room at the same time, to avoid damaging the subsurface if a wiping device is attached, for example … In the second step, the new single image at the calculated position and the calculated orientation is inserted in the existing map. The brightness of the single image to be included may also be adjusted, so that the brightness of the resulting map appears as uniform as possible … Integrating information on the height of an obstacle 5: The map that is compiled from the single images contains only the rectified view of the floor from device 1. However, since device 1 also maps the floor in subareas 2 that are not visible to humans, e.g., under tables and beds, the user may find it difficult to work out what he is looking at in the mosaic map. It is therefore helpful to include additional information in the map, to make the map easier to read for the user. This might include for example additional height information regarding obstacles 5 the device travels under, or also obstacles 5 close to the floor which it cannot travel under. The height of the obstacles 5 the device travels under may be calculated by triangulation from two single images, for example, if the movement of device 1 between the locations where the single images were taken is known … The comparison of the location data for selected subarea 2 with the location data contained in the map for identifying subarea 2 is carried out in similar fashion to the image registration described previously. In this process, the position determination is based for example on the calculation of matches between location data, such as local image features for example, or on a search process and comparison … In order to identify the subarea 2 selected by the user on the map, the photograph 3 taken by the user is compared with the map in various positions, in various orientations, and even with multiple perspective distortions. The position in the map for which the greatest correspondence is found, is then determined to be the position of subarea 2 … Communicating a user command regarding selected subarea 2: Device 1 must behave differently with respect to the selected subarea 2 depending on the cleaning or processing action the user wishes device 1 to perform. For example, it may be provided that device 1 travels to the selected subarea 2 and carries out a cleaning or processing activity. Or, it may also be provided that the selected subarea 2 is to be omitted from the cleaning or processing of the room by device 1. In the former case, device 1 travels straight to the subarea 2 in the room selected by the user as soon as subarea 2 has been identified. To do this, device 1 plans a route from its current position to the desired subarea 2, and traverses this route autonomously. Upon reaching the desired subarea 2, device 1 begins the cleaning or processing activity according to the specified size of subarea 2.”). 
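The obstacle-height triangulation Hillen mentions reduces to the standard stereo relation depth = f·B/d once the robot's travel between the two single images supplies the baseline. A worked sketch with made-up numbers (all values hypothetical):

```python
def height_by_triangulation(focal_px: float, baseline_m: float,
                            disparity_px: float) -> float:
    """Classic two-view relation: distance = focal_length * baseline / disparity.
    Here the 'distance' is the height of an overhead obstacle above an
    upward-looking camera, and the baseline is the robot's known travel
    between the two single images."""
    if disparity_px <= 0:
        raise ValueError("the obstacle must shift between the two images")
    return focal_px * baseline_m / disparity_px

# e.g. focal length 600 px, robot moved 0.10 m, feature shifted 80 px:
print(f"{height_by_triangulation(600, 0.10, 80):.2f} m")  # -> 0.75 m
```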
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Cui to include providing specification data obtained using the sensor system, the specification data characterizing the selection area; determining the selection area in the map based on the specification data; and providing information about the selection area to the mobile device, the information about the selection area instructing the mobile device to correspondingly take the selection area into account based on the navigating, as taught by Hillen as disclosed above, in order to ensure safe and efficient travel of the mobile device (Hillen Paragraph 3 “The invention relates to a method for cleaning or processing a room by means of an autonomously mobile device, wherein the method includes the following steps”).

Cui in view of Hillen fail to explicitly disclose wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data; and the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.

Abdelkader teaches wherein determining the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data (See at least Abdelkader FIG. 5 and Paragraph 76 “As illustrated in FIG. 5, the coarse-to-fine approach includes using the localization result from the depth camera (e.g., coarse localization) to constrain the search space of the 2D LIDAR measurements as the UAV 300 moves towards the perching point on the target 250. This constrained search space represents the portion of the depth camera FOV most likely to isolate the target 250 (or portion of interest of the target 250, such as the perching position). The 2D LIDAR measurements inside the constrained space are used to detect and localize the target 250 (e.g., fine localization), constraining both the size and direction (e.g., 1D scan) to perform the 2D LIDAR measurements.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui in view of Hillen to include that the position and/or orientation includes a coarse localization based on first sensor data and a fine localization based on further sensor data, as taught by Abdelkader as disclosed above, in order to ensure accurate entity location (Abdelkader Paragraph 68 “Therefore, by combining the two sensors, ambiguity of object detection and localization can be avoided by using a depth camera at far distances from the target 250, and accuracy and precision of alignment and perching can be improved by using a 2D LIDAR system at close distances to the target 250. In addition, processing power by the control circuit is reduced by consuming only one sensor's output in each of the two independent steps.”).

Cui in view of Hillen in view of Abdelkader fail to explicitly disclose that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system.
Abeling teaches that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system (See at least Abeling Paragraph 3 ““A first map” and/or “a second map” is, for example, to be understood to mean a digital map that is present in the form of (map) data values on a memory medium. This map is designed, for example, in such a way that one or multiple map layers are encompassed, one map layer showing a map from the bird's eye perspective (courses and positions of streets, buildings, landscape features, etc.), for example. This corresponds to a map of a navigation system, for example. Another map layer includes, for example, a radar map, the surroundings features, which are displayed by the radar map, being stored together with a radar signature. Another map layer includes, for example, a LIDAR map, the surroundings features, which are displayed by the LIDAR map, being stored together with a LIDAR point cloud and/or LIDAR objects. Another map layer encompasses, for example, a video map, the surroundings features, which are displayed by the video map, being stored together with objects recognizable by a video sensor”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui in view of Hillen in view of Abdelkader to include that the map includes annotations that are compatible with the sensor data, including at least one of camera images, Wi-Fi signatures, or environmental features detectable by the sensor system, as taught by Abeling as disclosed above, in order to ensure accurate obstacle detection (Abeling Paragraph 12 “With the aid of the method according to the present invention, the object of improving the reliability and precision of map data is advantageously achieved.”).

Claims 4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Cui (US 20200159246 A1) (“Cui”) in view of Hillen (US 20160027207 A1) (“Hillen”) in view of Abdelkader (US 20200174129 A1) (“Abdelkader”) in view of Abeling (US 20220205792 A1) (“Abeling”) further in view of Ma’as (US 20210191405 A1) (“Ma’as”).

With respect to claim 4, Cui in view of Hillen in view of Abdelkader in view of Abeling fail to explicitly disclose that the entity includes a person in the environment. Ma’as teaches that the entity includes a person in the environment (See at least Ma’as FIG. 7B and Paragraph 89 “In these illustrations, how an autonomous robot performs when deployed in disastrous areas is described. First, the robot may start a rangefinder sensor of the robot, and start to map the building. Once the partial map of the building is generated, the robot may move around to complete the map. During the movement of the robot, the same method of detecting an object that is moving is calculated using the RNN method. The rangefinder sensor has an advantage compared to other methods in detecting an object because the rangefinder result is not interfered by the smoke. In case that the robot is using a camera to detect an object, the robot may hardly detect anything because of the dense of the smoke. The result of the robot perception may be transmitted over wireless and displayed on a mobile device screen. When the robot detects movement of an object, which depends on the probability result of MOT, the robot may approach the target object, or continue to search for the entire rooms in the building.
In case of the robot detecting of high probability of a moving object, and the robot approaching the object, a button or voice may be used to confirm that the victims need help as shown in FIG. 7A. The robot then tries to find a route to the entrance of the building, based on the routes/paths that have been memorized since the robot entering the building. During the movement, the robot will maintain a distance with the moving object to ensure that the victim is able to follow the robot as shown in FIG. 7B.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui in view of Hillen in view of Abdelkader in view of Abeling to include that the entity includes a person in the environment, as taught by Ma’as as disclosed above, in order to ensure safe travel of the mobile device (Ma’as Paragraph 2 “The present application relates generally to an advanced intelligent remote sensing system for mobile robots, and more particularly to a method and a device for navigating autonomously through a dynamic/unstructured environment.”).

With respect to claim 8, Cui in view of Hillen in view of Abdelkader in view of Abeling fail to explicitly disclose that the sensor data include first sensor data and further sensor data, and the determination of the position and/or orientation of the entity in the map includes: determining, based on the first sensor data, a coarse position and/or orientation of the entity in the map; and determining, based on the further sensor data and the coarse position and/or orientation and/or the first sensor data, a fine position and/or orientation of the entity in the map.

Ma’as teaches that the sensor data include first sensor data and further sensor data, and the determination of the position and/or orientation of the entity in the map includes: determining, based on the first sensor data, a coarse position and/or orientation of the entity in the map; and determining, based on the further sensor data and the coarse position and/or orientation and/or the first sensor data, a fine position and/or orientation of the entity in the map (See at least Ma’as FIG. 13 and Paragraph 110 “FIG. 13 discloses the third step of this disclosure on how the autonomous robot scans the environment again when the autonomous robot moves, and updates the map when new information is found. For example, the robot may move from (x0, y0) to another point. Then, perception of the robot would change as follows. After moving to a new point, the robot gets new perspective about the objects. The robot gets the right side of objects B and D so the robot gets full shapes of the objects B and D. Also it now can detect an object E. This new perspective is then updated to the map saved in the robot”).
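The map update Ma'as describes (move, rescan, merge whatever has newly become visible) can be sketched as an occupancy-grid merge. The grid encoding and flag values below are hypothetical, not from the cited reference:

```python
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def update_map(saved: list[list[int]], scan: list[list[int]]) -> None:
    """Merge a new scan into the saved occupancy grid in place."""
    for r, row in enumerate(scan):
        for c, cell in enumerate(row):
            if cell != UNKNOWN:          # only overwrite with observed data
                saved[r][c] = cell

world  = [[UNKNOWN] * 4 for _ in range(2)]
scan_1 = [[FREE, OCCUPIED, UNKNOWN, UNKNOWN], [FREE, FREE, UNKNOWN, UNKNOWN]]
scan_2 = [[UNKNOWN, UNKNOWN, OCCUPIED, FREE], [UNKNOWN, UNKNOWN, FREE, FREE]]
update_map(world, scan_1)   # first viewpoint
update_map(world, scan_2)   # after moving: the objects' far sides appear
print(world)                # -> fully observed 2x4 grid
```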
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui in view of Hillen in view of Abdelkader in view of Abeling to include that the sensor data include first sensor data and further sensor data, and the determination of the position and/or orientation of the entity in the map includes: determining, based on the first sensor data, a coarse position and/or orientation of the entity in the map; and determining, based on the further sensor data and the coarse position and/or orientation and/or the first sensor data, a fine position and/or orientation of the entity in the map, as taught by Ma’as as disclosed above, in order to ensure safe travel of the mobile device (Ma’as Paragraph 2 “The present application relates generally to an advanced intelligent remote sensing system for mobile robots, and more particularly to a method and a device for navigating autonomously through a dynamic/unstructured environment.”).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Cui (US 20200159246 A1) (“Cui”) in view of Hillen (US 20160027207 A1) (“Hillen”) in view of Abdelkader (US 20200174129 A1) (“Abdelkader”) in view of Abeling (US 20220205792 A1) (“Abeling”) further in view of Jones (US 20190213438 A1) (“Jones”).

With respect to claim 6, Cui in view of Hillen in view of Abdelkader in view of Abeling fail to explicitly disclose a stationary terminal device in the environment has at least part of the sensor system, wherein the stationary terminal device includes a smart home terminal device. Jones teaches a stationary terminal device in the environment has at least part of the sensor system, wherein the stationary terminal device includes a smart home terminal device (See at least Jones FIG. 3 and Paragraph 100 “Other devices also can be wirelessly linked to the mobile cleaning robot 102. In the example of FIG. 3, the home 300 includes linked devices 328A and 328B. In some implementations, each of the linked devices 328A and 328B includes, e.g., sensors suitable for performing one or more of monitoring the home 300, monitoring occupants of the home 300, and monitoring operations of the mobile cleaning robot 102. These sensors can include, for example, one or more of imaging sensors, occupancy sensors, and environmental sensors.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui in view of Hillen in view of Abdelkader in view of Abeling to include a stationary terminal device in the environment has at least part of the sensor system, wherein the stationary terminal device includes a smart home terminal device, as taught by Jones as disclosed above, in order to increase the detection of entities in the mobile device’s environment (Jones Paragraph 3 “In a general aspect, the description features a system for enabling a mobile robot to be aware of its surroundings and perform tasks taking into account of the characteristics of its surroundings”).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Cui (US 20200159246 A1) (“Cui”) in view of Hillen (US 20160027207 A1) (“Hillen”) in view of Abdelkader (US 20200174129 A1) (“Abdelkader”) in view of Abeling (US 20220205792 A1) (“Abeling”) further in view of Jones II (US 20100049365 A1) (“Jones II”).
With respect to claim 9, Cui in view of Hillen in view of Abdelkader in view of Abeling fail to explicitly disclose that the specification data characterize a position and/or orientation of the mobile terminal device, and the method further comprises: providing additional information that characterizes the selection area, the additional information including a diameter of the selection area, in relation to the position and/or orientation of the mobile terminal device, and wherein the selection area is determined based on the specification data and the additional information.

Jones II teaches that the specification data characterize a position and/or orientation of the mobile terminal device, and the method further comprises: providing additional information that characterizes the selection area, the additional information including a diameter of the selection area, in relation to the position and/or orientation of the mobile terminal device, and wherein the selection area is determined based on the specification data and the additional information (See at least Jones II FIG. 13B and Paragraph 113 “FIG. 13B shows the movement of a preferred embodiment of robot 10, whereby the robot cycles between BOUNCE and WALL FOLLOWING behaviors. As the robot follows path 99, each time the robot 10 encounters a wall 100, the robot follows the wall for a distance equal to twice the robot's diameter. The portions of the path 99 in which the robot 10 operates in wall following mode are labeled 51. This method provides greatly increased coverage, along with attendant increases in cleaning rate and perceived effectiveness.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Cui in view of Hillen in view of Abdelkader in view of Abeling to include that the specification data characterize a position and/or orientation of the mobile terminal device, and the method further comprises: providing additional information that characterizes the selection area, the additional information including a diameter of the selection area, in relation to the position and/or orientation of the mobile terminal device, and wherein the selection area is determined based on the specification data and the additional information, as taught by Jones II as disclosed above, in order to ensure safe traversal of the mobile device (Jones II Paragraph 2 “This invention relates generally to autonomous vehicles or robots, and more specifically to methods and mobile robotic devices for covering a specific area as might be required of, or used as, robotic cleaners or lawn mowers.”).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM ABDOALATIF ALSOMAIRY whose telephone number is (571) 272-5653. The examiner can normally be reached M-F 7:30-5:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi, can be reached at 313-446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IBRAHIM ABDOALATIF ALSOMAIRY/
Examiner, Art Unit 3667

/FARIS S ALMATRAHI/
Supervisory Patent Examiner, Art Unit 3667

¹ There is no limiting definition as to what constitutes an entity.
² There is no limiting definition as to what constitutes an entity.
³ There is no limiting definition as to what constitutes an entity.
⁴ There is no limiting definition as to what constitutes an entity.

Prosecution Timeline

Oct 02, 2023
Application Filed
Jan 25, 2025
Non-Final Rejection — §101, §103
May 05, 2025
Applicant Interview (Telephonic)
May 05, 2025
Examiner Interview Summary
May 23, 2025
Response Filed
Jun 08, 2025
Final Rejection — §101, §103
Sep 10, 2025
Response after Non-Final Action
Sep 29, 2025
Request for Continued Examination
Oct 09, 2025
Response after Non-Final Action
Oct 10, 2025
Non-Final Rejection — §101, §103
Jan 13, 2026
Response Filed
Mar 27, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602044
VEHICLE CONTROL SYSTEM, VEHICLE CONTROL METHOD, AND VEHICLE CONTROL PROGRAM
2y 5m to grant Granted Apr 14, 2026
Patent 12578728
AUTONOMOUS SNOW REMOVING MACHINE
2y 5m to grant Granted Mar 17, 2026
Patent 12426758
METHOD AND APPARATUS FOR CONTROLLING ROBOT, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Sep 30, 2025
Patent 12313379
SYSTEM FOR NEUTRALISING A TARGET USING A DRONE AND A MISSILE
2y 5m to grant Granted May 27, 2025
Patent 12265385
SYSTEMS, DEVICES, AND METHODS FOR MILLIMETER WAVE COMMUNICATION FOR UNMANNED AERIAL VEHICLES
2y 5m to grant Granted Apr 01, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
40%
Grant Probability
49%
With Interview (+8.4%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 82 resolved cases by this examiner. Grant probability derived from career allow rate.
