Prosecution Insights
Last updated: April 19, 2026
Application No. 18/980,678

ROBOT-FRIENDLY BUILDINGS, AND MAP GENERATION METHODS AND SYSTEMS FOR ROBOT OPERATION

Non-Final OA: §101, §103, Double Patenting
Filed
Dec 13, 2024
Examiner
ISMAIL, MAHMOUD S
Art Unit
3662
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Naver Corporation
OA Round
1 (Non-Final)
Grant Probability: 89% (Favorable)
OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (689 granted / 778 resolved; +36.6% vs TC avg, above average)
Interview Lift: +11.5% for resolved cases with interview (moderate, roughly +12%)
Typical Timeline: 2y 8m average prosecution; 39 applications currently pending
Career History: 817 total applications across all art units
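As a quick sanity check, the headline figures above are internally consistent. A minimal sketch (numbers copied from the dashboard; the round-to-whole-percent display convention is an assumption):

```python
# Career allowance rate quoted above: 689 granted out of 778 resolved.
granted, resolved = 689, 778
career_allow_rate = granted / resolved  # 0.8856...

# Prints the precise rate; the dashboard rounds it up to 89%.
print(f"Career allow rate: {career_allow_rate:.1%}")
```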

Statute-Specific Performance

§101: 15.4% (-24.6% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)

Baseline: Tech Center average estimate. Based on career data from 778 resolved cases.
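The per-statute deltas above are each quoted against the Tech Center baseline. Back-computing that baseline from every (rate, delta) pair shows the figures are mutually consistent with a single TC average of about 40%. A minimal check (numbers copied from the table above):

```python
# (examiner overcome rate %, delta vs TC average %) per statute,
# taken from the Statute-Specific Performance table above.
stats = {
    "§101": (15.4, -24.6),
    "§103": (43.7, 3.7),
    "§102": (17.5, -22.5),
    "§112": (13.6, -26.4),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta  # every pair implies the same ~40.0% baseline
    print(f"{statute}: implied TC average = {implied_tc_avg:.1f}%")
```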

Office Action

Rejections: §101, §103, Double Patenting
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-18 are pending in the instant application.

Priority

Examiner acknowledges Applicant’s claim to priority benefits of International Application No. PCT/KR2023/004438, filed April 3, 2023, which claims priority to Korean Application No. 10-2022-0072464, filed June 14, 2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/13/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered if signed and initialed by the Examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “communication unit configured to receive” in claim 18.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The following are the interpreted corresponding structures found within the specification for the above limitations: communication unit: Figure 4, item 110; paragraph 0119.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
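The three-prong analysis above is a conjunctive test: prongs (A) and (B) must be met, and the sufficient structure of prong (C) must be absent. Purely as an illustrative sketch for triaging limitations, assuming hypothetical yes/no inputs (each prong determination is a legal judgment, not a computation; the function name is ours):

```python
def invokes_112f(uses_means_or_nonce_term: bool,
                 modified_by_functional_language: bool,
                 recites_sufficient_structure: bool) -> bool:
    """Toy encoding of the MPEP § 2181 three-prong test: § 112(f) applies
    only when prongs (A) and (B) are met and prong (C) finds no sufficient
    structure, material, or acts for performing the claimed function."""
    return (uses_means_or_nonce_term
            and modified_by_functional_language
            and not recites_sufficient_structure)

# The "communication unit configured to receive" limitation in claim 18, as
# characterized in this Office action (nonce term, "configured to" functional
# language, no structural modifier):
print(invokes_112f(True, True, False))  # True
```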
Double Patenting

A rejection based on double patenting of the "same invention" type finds its support in the language of 35 U.S.C. 101 which states that "whoever invents or discovers any new and useful process ... may obtain a patent therefor ..." (emphasis added). Thus, the term "same invention," in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957); and In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970). A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the conflicting claims so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Claims 1, 14, and 18 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1 and 18 of Yoon et al., co-pending Application No. 18/980,813. Although the claims at issue are not identical, they are not patentably distinct from each other because they are drawn to obvious variations. In view of the above, since the subject matter recited in claims 1, 14, and 18 of the instant application is fully disclosed in and covered by claims 1 and 18 of co-pending Application No. 18/980,813, allowing the claims would result in an unjustified or improper timewise extension of the "right to exclude" granted by a patent.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims’ subject matter eligibility follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”).

With respect to claims 1, 14, and 18: Claims 1, 14, and 18 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claims 1, 14, and 18 are directed to one of the statutory categories.

Step 2A Prong One Analysis: The claim recites, inter alia, “allocating at least one graphic object on the specific map based on editing information received from the electronic device.” A person of ordinary skill in the art can mentally allocate objects on a map. Thus, this limitation is construed to be directed to the abstract idea of mental processes. The claim, as drafted, is a process that, under its broadest reasonable interpretation, covers mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), but for the recitation of generic computer components. Accordingly, the claim recites an abstract idea.

Step 2A Prong Two Analysis: This judicial exception is not integrated into a practical application. The only limitations not treated above (“receiving a map editing request for a specific floor among a plurality of floors of a building”, “providing an editing interface on a display unit of an electronic device in response to the map editing request, the editing interface including at least a part of a specific map corresponding to the specific floor”, and “updating the specific map on a cloud server based on completion of the allocating such that robots travel through the specific floor according to an attribute of the at least one graphic object”) involve the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). In particular, the claim recites only additional elements that are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. See MPEP § 2106.05(f). The additional elements of the “electronic device”, “communication unit”, and “processing circuitry” are recited at a high level of generality, comprising only a processor that simply performs generic computer functions. Generic computer components performing generic computer functions, alone, do not amount to significantly more than the abstract idea. These generic computer components are recited at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using generic computer components to perform the abstract idea amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Hyun et al. (KR20190100118A) in view of Hum (KR20200015096A).
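Stepping back, the § 101 eligibility analysis above walks the 2019 PEG decision flow (Step 1, Step 2A Prongs One and Two, Step 2B). Purely as an illustrative sketch, with each finding hard-coded to the examiner's conclusions (the function and its boolean inputs are our own stand-ins for legal determinations, not part of the Office action):

```python
def peg_eligibility(statutory_category: bool,
                    recites_judicial_exception: bool,
                    integrated_into_practical_application: bool,
                    adds_significantly_more: bool) -> str:
    """Toy encoding of the 2019 PEG flow: Step 1 -> Step 2A -> Step 2B."""
    if not statutory_category:
        return "ineligible (Step 1: not a statutory category)"
    if not recites_judicial_exception:
        return "eligible (Step 2A Prong One: no judicial exception)"
    if integrated_into_practical_application:
        return "eligible (Step 2A Prong Two: practical application)"
    if adds_significantly_more:
        return "eligible (Step 2B: significantly more)"
    return "ineligible (abstract idea without significantly more)"

# The examiner's findings for claims 1, 14, and 18 in this action: statutory
# category yes, mental process recited, no practical application, no
# significantly more.
print(peg_eligibility(True, True, False, False))
```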
As per claim 1, Hyun discloses a method of generating a map, comprising:

receiving a map editing request for a specific floor among a plurality of floors of a building (see at least paragraph 0070; wherein the indoor map creation device (330) can call up a floor plan image of a building stored in a database (350) based on a plan registration request signal input to a user terminal (110, 500) (S110));

providing an editing interface on a display unit of an electronic device in response to the map editing request, the editing interface including at least a part of a specific map corresponding to the specific floor (see at least paragraph 0070; wherein the indoor map creation device (330) may also call up a floor plan image of a building stored in the memory (540) of the user terminal (110). The indoor map creation device (330) can display the above-mentioned called drawing image on the user interface screen of the user terminal (110));

allocating at least one graphic object on the specific map based on editing information received from the electronic device (see at least paragraph 0040; wherein the indoor map creation unit (330) can create an indoor map by drawing major facilities existing indoors (e.g., offices, restrooms, stairs, etc.) on a map corresponding to a specific building using at least one of polygons, polylines, points, nodes, and links).

Hyun does not explicitly mention updating the specific map on a cloud server based on completion of the allocating such that robots travel through the specific floor according to an attribute of the at least one graphic object.
However, Hum discloses: updating the specific map on a cloud server based on completion of the allocating such that robots travel through the specific floor according to an attribute of the at least one graphic object (see at least paragraphs 0061-0062; wherein the attribute information setting unit (123) sets attribute information to the basic map information adjusted by the scale correction unit. For example, attribute information may include virtual walls within a building, deceleration sections of an autonomous driving device (20), and obstacle information within a building…see at least paragraph 0104; wherein if the update information is generated, transmits the update information to the control server (30) (S260)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hum with the teachings of Hyun. The motivation for doing so would have been to reduce loop closure errors; see Hum, paragraphs 0006-0009.

As per claim 2, Hum discloses wherein the at least one graphic object includes at least one of: an area graphic object for specifying a traveling mode of the robots for a specific area of the specific floor; a travel node graphic object corresponding to a travel node linked to travel of the robots, for configuring a traveling path of the robots; an operation node graphic object corresponding to an operation node linked to a specific operation of the robots; or a facility node graphic object corresponding to a facility node linked to a facility on the specific floor (see at least paragraph 0062; wherein a virtual area that the autonomous driving device (20) refers to for driving and operation, a basic entry prevention area, a driving caution area, information on the operation of the autonomous driving device (20), etc).
As per claim 3, Hum discloses wherein the at least one graphic object includes the area graphic object; the allocating includes allocating the area graphic object to the specific area; and the updating causes the robots to travel the specific area according to travel characteristics of the traveling mode linked to the area graphic object (see at least paragraph 0102; wherein the device control unit (26) controls the driving unit (23) to initiate autonomous driving (S240)…see at least paragraph 0104; wherein the update information generation unit (25) compares the map information received from the map production device (10) with the spatial information generated by the spatial recognition unit (24), and the device control unit (26) determines whether update information is generated by the update information generation unit (25) (S250), and if the update information is generated, transmits the update information to the control server (30) (S260)).

As per claim 4, Hyun discloses wherein the area graphic object is configured to have one type among a plurality of different types, each of the plurality of different types being linked to a different traveling mode (see at least paragraph 0039; wherein the user information collection unit (320) can perform a function of collecting map creation command signals input to the user terminal (110). At this time, the map creation command signals may include, but are not limited to, a building search command, a building add/delete/change command, a floor add/delete/change command, a polygon add/delete/change command, a polyline add/delete/change command, a point add/delete/change command, a node add/delete/change command, and a link add/delete/change command).
As per claim 5, Hum discloses wherein the at least one graphic object includes a plurality of area graphic objects, each of the plurality of area graphic objects having a corresponding type among the plurality of different types; the plurality of different types include a first type of area graphic object and a second type of area graphic object, the first type of area graphic object being linked to a first traveling mode, and the second type of area graphic object being linked to a second traveling mode different from the first traveling mode; and a visual exterior appearance of the first type of area graphic object is different from a visual exterior appearance of the second type of area graphic object (see at least paragraph 0062; wherein the attribute information may include information on a virtual area that the autonomous driving device (20) refers to for driving and operation, a basic entry prevention area, a driving caution area, information on the operation of the autonomous driving device (20), etc. For example, attribute information may include virtual walls within a building, deceleration sections of an autonomous driving device (20), and obstacle information within a building). 
As per claim 6, Hum discloses wherein the plurality of area graphic objects includes a first area graphic object and a second area graphic object, the first area graphic object being of the first type of area graphic object, and the second area graphic object being of the second type of area graphic object; the allocating includes allocating the first area graphic object to a first specific area of the specific floor, and allocating the second area graphic object to a second specific area of the specific floor; and the updating causes the robots to travel according to the first traveling mode based on entering the first specific area, travel according to a third traveling mode based on exiting the first specific area, the third traveling mode being a traveling mode the robots had before entering the first specific area, travel according to the second traveling mode based on entering the second specific area, and travel according to a fourth traveling mode based on exiting the second specific area, the fourth traveling mode being a traveling mode the robots had before entering the second specific area (see at least paragraph 0062; wherein the attribute information may include information on a virtual area that the autonomous driving device (20) refers to for driving and operation, a basic entry prevention area, a driving caution area, information on the operation of the autonomous driving device (20), etc. For example, attribute information may include virtual walls within a building, deceleration sections of an autonomous driving device (20), and obstacle information within a building).
As per claim 7, Hyun discloses wherein the editing information includes information for specifying at least one of: a placement position of the area graphic object on the specific map; a type of the area graphic object; a size of the area graphic object; or a shape of the area graphic object (see at least paragraph 0050; wherein the drawing image processing unit (333) can perform a function of changing the position, size, shape, etc. of the drawing image displayed on the user terminal (110) based on the drawing control signal of the terminal user. The drawing image displayed on the user terminal (110) can be changed in position, size, shape, etc. so that the outline of the drawing image and the outline of the building corresponding to the drawing image match each other according to the drawing control signal of the terminal user).

As per claim 8, Hyun discloses wherein the editing interface includes a first interface area and a second interface area, the first interface area including the specific map, and the second interface area including a setting menu for a setting related to editing of the specific map; and the editing information is based on a user input that is input to at least one of the first interface area or the second interface area (see at least paragraph 0029; wherein the map creation screen (200) displayed on the user terminal (110) may include a first display area (210) for displaying a plurality of main menus for creating an outdoor map and/or an indoor map, a second display area (220) for displaying map data regarding a specific geographic area, and a third display area (230) for displaying detailed information regarding a specific menu).
As per claim 9, Hyun discloses wherein the editing information is based on at least one of: a first user input to the first interface area specifying a positioning area where the area graphic object is to be positioned; a second user input to the first interface area specifying a size of the area graphic object; or a third user input to the first interface area specifying a shape of the area graphic object (see at least paragraph 0040; wherein the indoor map creation unit (330) can perform the function of creating an indoor map of a specific building based on map creation command signals transmitted from the user information collection unit (320). That is, the indoor map creation unit (330) can create an indoor map by drawing major facilities existing indoors (e.g., offices, restrooms, stairs, etc.) on a map corresponding to a specific building using at least one of polygons, polylines, points, nodes, and links).

As per claim 10, Hyun discloses wherein the editing interface includes a graphic object editing tool in the first interface area, the graphic object editing tool being configured to receive the first user input, the second user input, and the third user input (see at least paragraph 0040; wherein the indoor map creation unit (330) can perform the function of creating an indoor map of a specific building based on map creation command signals transmitted from the user information collection unit (320). That is, the indoor map creation unit (330) can create an indoor map by drawing major facilities existing indoors (e.g., offices, restrooms, stairs, etc.) on a map corresponding to a specific building using at least one of polygons, polylines, points, nodes, and links).
As per claim 11, Hyun discloses wherein the allocating of the at least one graphic object includes: specifying the area graphic object based on at least one of the first user input, the second user input, or the third user input (see at least paragraph 0040; wherein the indoor map creation unit (330) can perform the function of creating an indoor map of a specific building based on map creation command signals transmitted from the user information collection unit (320). That is, the indoor map creation unit (330) can create an indoor map by drawing major facilities existing indoors (e.g., offices, restrooms, stairs, etc.) on a map corresponding to a specific building using at least one of polygons, polylines, points, nodes, and links); and receiving a selection of the one type of the area graphic object based on a fourth user input to the setting menu (see at least paragraph 0030; wherein a building menu (213) for adding/changing/deleting a building on the map, a floor menu (214) for adding/changing/deleting a floor in a pre-designated building, a polygon menu (216) for adding/changing/deleting a polygon on the map, a polyline menu (216) for adding/changing/deleting a polyline on the map, a point menu (217) for adding/changing/deleting a point on the map, a node menu (218) for adding/changing/deleting a node on the map, a link menu (219) for adding/changing/deleting a link on the map, etc).

As per claim 12, Hyun discloses wherein the allocating of the at least one graphic object includes configuring a color of the area graphic object on the specific map to have a color matched to the one type of the area graphic object (see at least paragraph 0101; wherein the indoor map creation device (330) may display the color, size, or shape of the additional node (1330) differently from the color, size, or shape of the points (1340) that constitute the polygon (1310)).
As per claim 13, Hum discloses wherein the updating causes the robots to travel along a traveling path formed by a plurality of travel nodes, each of the plurality of travel nodes corresponding to one among a plurality of travel node graphic objects allocated on the specific map; the allocating of the graphic object includes configuring a travel direction specification to define a traveling direction of the robots between at least a portion of the plurality of travel nodes; and the method further comprises receiving the travel direction specification through the editing interface; the receiving of the travel direction specification includes adding a connecting line that connects adjacent travel node graphic objects among the plurality of travel node graphic objects through the editing interface (see at least paragraph 0062; wherein a virtual area that the autonomous driving device (20) refers to for driving and operation, a basic entry prevention area, a driving caution area, information on the operation of the autonomous driving device (20), etc).

As per claim 14, Hyun discloses a method of generating a map, comprising:

receiving a map editing request for a specific floor among a plurality of floors of a building (see at least paragraph 0070; wherein the indoor map creation device (330) can call up a floor plan image of a building stored in a database (350) based on a plan registration request signal input to a user terminal (110, 500) (S110));

providing an editing interface on a display unit of an electronic device in response to the map editing request, the editing interface including at least a part of a specific map corresponding to the specific floor (see at least paragraph 0070; wherein the indoor map creation device (330) may also call up a floor plan image of a building stored in the memory (540) of the user terminal (110). The indoor map creation device (330) can display the above-mentioned called drawing image on the user interface screen of the user terminal (110));

allocating at least one node on the specific map based on editing information received from the electronic device (see at least paragraph 0040; wherein the indoor map creation unit (330) can create an indoor map by drawing major facilities existing indoors (e.g., offices, restrooms, stairs, etc.) on a map corresponding to a specific building using at least one of polygons, polylines, points, nodes, and links).

Hyun does not explicitly mention updating the specific map on a cloud server based on completion of the allocating such that robots travel the specific floor along the at least one node allocated on the specific map, or perform an operation defined at the at least one node on the specific floor.

However, Hum discloses: updating the specific map on a cloud server based on completion of the allocating such that robots travel the specific floor along the at least one node allocated on the specific map, or perform an operation defined at the at least one node on the specific floor (see at least paragraphs 0061-0062; wherein the attribute information setting unit (123) sets attribute information to the basic map information adjusted by the scale correction unit. For example, attribute information may include virtual walls within a building, deceleration sections of an autonomous driving device (20), and obstacle information within a building…see at least paragraph 0104; wherein if the update information is generated, transmits the update information to the control server (30) (S260)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hum with the teachings of Hyun. The motivation for doing so would have been to reduce loop closure errors; see Hum, paragraphs 0006-0009.
As per claim 15, Hum discloses wherein the at least one node corresponds to at least one of: a first type of node that configures a traveling path for the robots, the first type of node corresponding to a travel node linked to the travel of the robots, a second type of node that corresponds to an operation node linked to a specific operation of the robots, or a third type of node that corresponds to a facility node linked to a facility on the specific floor; and the updating causes the robots to perform an operation defined at the at least one node based on the at least one node being allocated to a place where the robots are positioned (see at least paragraph 0062; wherein the attribute information may include information on a virtual area that the autonomous driving device (20) refers to for driving and operation, a basic entry prevention area, a driving caution area, information on the operation of the autonomous driving device (20), etc. For example, attribute information may include virtual walls within a building, deceleration sections of an autonomous driving device (20), and obstacle information within a building).

As per claim 16, Hum discloses wherein the at least one node includes the operation node; the specific operation of the robots includes a waiting operation in which the robots stop traveling and wait when positioned at the operation node; and the allocating includes setting the operation node with direction information that defines a direction in which the robots positioned at the operation node face (see at least paragraph 0062; wherein the attribute information may include information on a virtual area that the autonomous driving device (20) refers to for driving and operation, a basic entry prevention area, a driving caution area, information on the operation of the autonomous driving device (20), etc.
For example, attribute information may include virtual walls within a building, deceleration sections of an autonomous driving device (20), and obstacle information within a building).

As per claim 17, Hyun discloses wherein the at least one node includes a plurality of nodes; the allocating includes grouping a subset of nodes in a same zone on the specific map, the subset of nodes being among the plurality of nodes; and the method further comprises providing identification information in an area of the editing interface in response to selection of one node among the subset of nodes through the editing interface, the identification information including at least one of identification information on the one node or identification information on a zone including the one node (see at least paragraph 0050; wherein the drawing image processing unit (333) can change the position, size, shape, etc. of the drawing image displayed on the user terminal (110) based on the drawing control signal of the terminal user, so that the outline of the drawing image and the outline of the building corresponding to the drawing image match each other).
As per claim 18, Hyun discloses a system for generating a map, comprising: a communication unit configured to receive a map editing request for a specific floor among a plurality of floors of a building (see at least paragraph 0070; wherein the indoor map creation device (330) can call up a floor plan image of a building stored in a database (350) based on a plan registration request signal input to a user terminal (110, 500) (S110)); and processing circuitry (see at least paragraph 0049; drawing image processing unit (333)) configured to provide an editing interface on a display unit of an electronic device in response to the map editing request, the editing interface including at least a part of a specific map corresponding to the specific floor (see at least paragraph 0070; wherein the indoor map creation device (330) may also call up a floor plan image of a building stored in the memory (540) of the user terminal (110). The indoor map creation device (330) can display the called-up drawing image on the user interface screen of the user terminal (110)), and allocate at least one graphic object on the specific map based on editing information received from the electronic device (see at least paragraph 0040; wherein the indoor map creation unit (330) can create an indoor map by drawing major facilities existing indoors (e.g., offices, restrooms, stairs, etc.) on a map corresponding to a specific building using at least one of polygons, polylines, points, nodes, and links).

Hyun does not explicitly mention updating the specific map on a cloud server based on completion of the allocation of the at least one graphic object such that robots travel through the specific floor according to an attribute of the at least one graphic object.
However, Hum does disclose updating the specific map on a cloud server based on completion of the allocation of the at least one graphic object such that robots travel through the specific floor according to an attribute of the at least one graphic object (see at least paragraphs 0061-0062; wherein the attribute information setting unit (123) sets attribute information to the basic map information adjusted by the scale correction unit. For example, attribute information may include virtual walls within a building, deceleration sections of an autonomous driving device (20), and obstacle information within a building… see at least paragraph 0104; wherein if the update information is generated, transmits the update information to the control server (30) (S260)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hum with those of Hyun. The motivation for doing so would have been to reduce loop closure errors (see Hum, paragraphs 0006-0009).

Relevant Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

USPGPub 2017/0031925 – Provides mapping of dynamic spaces, such as building floor plans.
USPGPub 2015/0193416 – Allows a user to insert annotations or comments in a map application and place them on a digital map, as desired by the user.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHMOUD S ISMAIL, whose telephone number is (571) 272-1326. The examiner can normally be reached M-F, 8:00 AM-4:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jelani Smith, can be reached at (571) 270-3969. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MAHMOUD S ISMAIL/
Primary Examiner, Art Unit 3662

Prosecution Timeline

Dec 13, 2024
Application Filed
Jan 29, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602045
Autonomous Operation Method, Work Vehicle, And Autonomous Operation System
2y 5m to grant Granted Apr 14, 2026
Patent 12602053
INFORMATION PROCESSING APPARATUS, MOVING BODY CONTROL SYSTEM, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12603772
Vehicle Diagnostic System, Method, and Apparatus
2y 5m to grant Granted Apr 14, 2026
Patent 12601144
WORKING MACHINE
2y 5m to grant Granted Apr 14, 2026
Patent 12588671
METHOD FOR CALIBRATING AN AGRICULTURAL SPRAYER
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+11.5%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 778 resolved cases by this examiner. Grant probability derived from career allow rate.
