Prosecution Insights
Last updated: April 19, 2026
Application No. 18/546,029

MAP DISPLAY METHOD AND APPARATUS, MEDIUM, AND ELECTRONIC DEVICE

Non-Final OA (§101, §103)
Filed: Aug 10, 2023
Examiner: YANOSKA, JOSEPH ANDERSON
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BEIJING ROBOROCK INNOVATION TECHNOLOGY CO., LTD.
OA Round: 3 (Non-Final)
Grant Probability: 38% (At Risk)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Grants only 38% of cases
Career Allow Rate: 38% (10 granted / 26 resolved; -13.5% vs TC avg)
Interview Lift: +60.1% (strong interview lift; resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline)
Currently Pending: 34
Total Applications: 60 (career history, across all art units)
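The headline figures above reconcile as simple ratios over the examiner's resolved cases. A quick sketch (figures taken from the card above; treating the interview lift as additive percentage points is an assumption about how the tool reports it):

```python
# Reconcile the examiner stat card (figures from the card above;
# additive percentage-point lift is an assumption, not tool documentation).
granted, resolved = 10, 26
career_allow_rate = 100 * granted / resolved      # percent
interview_lift_pts = 60.1                         # percentage points

print(round(career_allow_rate))                        # 38 -> "38% Career Allow Rate"
print(round(career_allow_rate + interview_lift_pts))   # 99 -> "99% With Interview"
```

The 10/26 ratio rounds to the displayed 38%, and adding the +60.1-point lift lands on the 99% with-interview figure, so the two headline numbers are internally consistent.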

Statute-Specific Performance

§101: 28.5% (-11.5% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 26 resolved cases
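The per-statute deltas are consistent with a single Tech Center baseline. Backing it out (a sanity check on the figures above, not data from the tool):

```python
# Back out the Tech Center average implied by each "vs TC avg" delta.
examiner_allow = {"101": 28.5, "103": 47.1, "102": 15.6, "112": 7.8}
delta_vs_tc   = {"101": -11.5, "103": 7.1, "102": -24.4, "112": -32.2}

implied_tc_avg = {s: round(examiner_allow[s] - delta_vs_tc[s], 1)
                  for s in examiner_allow}
print(implied_tc_avg)   # every statute backs out to 40.0
```

Every statute implies the same 40.0% Tech Center average estimate, matching the single "black line" baseline noted above.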

Office Action

§101, §103
Detailed Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a non-final Office Action on the merits. Claims 1, 5-7, 9-10, and 14-16 are currently pending and are addressed below.

Priority

Acknowledgment is made of applicant's claim of priority to Chinese patent application No. 202110184745.8 and Chinese patent application No. 202110184843.1, both filed on February 10, 2021.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/07/2026 has been entered.

Response to Amendment

The amendment filed 01/07/2026 has been entered. Applicant has amended Claims 1 and 10 and canceled Claims 4 and 13. Claims 1, 5-7, 9-10, and 14-16 remain pending in the application.

Reply to Applicant's Remarks

Applicant's remarks filed 01/07/2026 have been fully considered and are addressed as follows:

Claim Rejections Under 35 U.S.C. 101: Applicant's amendments to the claims filed 01/07/2026 have not overcome the 35 U.S.C. 101 rejections previously set forth. Regarding the Applicant's argument that the claim limitations "cannot practically be performed entirely in a human mind because it requires manipulation of digital map layers, coordinates, and rendering operations producing a modified visual data structure for display", the Examiner respectfully finds the Applicant's arguments not persuasive because they are not commensurate with the scope of the claims.
While the claims are written to require the limitations to be performed by user equipment, the mere acts of generating a map, determining positional relationships, and demarcating a region on a map are still acts that can be performed in the human mind. It is important to note that "Claims can recite a mental process even if they are claimed as being performed on a computer" (see at least MPEP 2106.04(a)(2)(III)(C)). Further, the claims do not specifically recite any manipulations of computer data structures, or outputs that are modified computer data structures, and steps such as generating, determining, and demarcating are kept at a high level of generality such that they can reasonably be performed in the human mind with the help of pen and paper.

Regarding the Applicant's argument that "The Office's characterization of "acquiring" and "displaying" as extra-solution activities overlooks the core technical constraints: sub-region magnification by preset scale while preserving the outside region's size and drawing units tied to boundary coordinate data onto a magnified layer that overlays the base map. These steps are not mere data display; they are directed to a concrete rendering pipeline that changes data structures and produces a modified image (magnified sub-region plus unit overlays) for display", the Examiner respectfully disagrees, as the limitations in question are recited at a high level of generality such that, given their broadest reasonable interpretation, they contain only the mere collection and display of data. Therefore, because the claims recite only mental processes and insignificant extra-solution activities, there are no additional elements that can integrate the abstract idea into a practical application. Further, the claim cannot provide an improvement to the technology, as an improved abstract idea is still an abstract idea.
(see MPEP 2106.05(a), Section II: "However, it is important to keep in mind that an improvement in the abstract idea…is not an improvement in technology"). Please see the detailed rejection below.

Claim Rejections Under 35 U.S.C. 103: Applicant's amendments and/or arguments with respect to the rejection of Claims 1 and 10 under 35 U.S.C. 103 as set forth in the Office action of 10/07/2025 have been considered but are moot because the new ground(s) of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 5-7, 9-10, and 14-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims' subject matter eligibility will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) ("2019 PEG").

101 Analysis - With respect to Claims 1 and 10

Claims 1 and 10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis - Step 1: Claim 1 is directed towards a method, which falls within the statutory category of a process. Claim 10 is directed towards an apparatus, which falls within the statutory category of a machine. Therefore, Claims 1 and 10 are within at least one of the four statutory categories.
101 Analysis - Step 2A, Prong One: Regarding Prong One of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.

Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites, inter alia:

"A map display method, comprising: acquiring, by a user equipment in interaction with a cleaning apparatus, room map data and region map data, wherein the region map data comprises region boundary coordinate data; generating, by the user equipment, a room map based on the room map data, and a region map layer on the room map based on the region map data; and displaying, by the user equipment, the room map covered with the region map layer; wherein generating the region map layer on the room map based on the region map data comprises: determining a position relationship between the room map and the region map layer based on the room map data and the region map data; and demarcating a sub-region corresponding to the region map layer from the room map based on the position relationship, and generating the region map layer on the sub-region by magnifying the sub-region by a preset scale, keeping a size of another region outside the sub-region unchanged, and drawing point units or linear units corresponding to the region map data on the magnified sub-region."

The examiner submits that the foregoing bolded limitation(s) constitute a "mental process" because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind.
For example, "generating", "determining", and "demarcating", in the context of this claim, all encompass a person looking at available data and forming a simple judgement (determination, analysis, comparison, etc.) either manually or using a pen and paper. Accordingly, the claim recites at least one abstract idea.

The examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas - the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).

As drafted, the above claims, under their broadest reasonable interpretation, cover mental processes performed in the human mind (including an observation, evaluation, judgement, or opinion) that are merely completed via generic computer components. Accordingly, the claims recite an abstract idea.

Step 2A, Prong Two Analysis: Regarding Prong Two of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether each claim, as a whole, integrates the abstract idea into a practical application.
As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application".

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"). Claim 1 recites, inter alia:

"A map display method, comprising: acquiring, by a user equipment in interaction with a cleaning apparatus, room map data and region map data, wherein the region map data comprises region boundary coordinate data; generating, by the user equipment, a room map based on the room map data, and a region map layer on the room map based on the region map data; and displaying, by the user equipment, the room map covered with the region map layer; wherein generating the region map layer on the room map based on the region map data comprises: determining a position relationship between the room map and the region map layer based on the room map data and the region map data; and demarcating a sub-region corresponding to the region map layer from the room map based on the position relationship, and generating the region map layer on the sub-region by magnifying the sub-region by a preset scale, keeping a size of another region outside the sub-region unchanged, and drawing point units or linear units corresponding to the region map data on the magnified sub-region."

For the following reason(s), the examiner submits that the above identified additional
limitations do not integrate the above-noted abstract idea into a practical application.

Regarding the additional limitations of "acquiring…" and "displaying…", these limitations merely describe the sending, receiving, and display of data, which are insignificant extra-solution activities. See MPEP § 2106.05(g). Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Step 2B Analysis: The claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, the additional element of using generic computer components to perform the abstract idea amounts to no more than mere instructions to apply the exception using a generic computer component, and mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the act of collecting and displaying data amounts to no more than merely storing and displaying information and is thus extra-solution activity. The claims are not patent eligible.
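The disputed claim 1 limitation is, in effect, a focal-magnification rendering step: demarcate a sub-region, magnify only that sub-region by a preset scale, and draw boundary-coordinate units onto the magnified layer. As an illustration only (the claim prescribes no data model; the grid representation and every name below are assumptions), the sequence can be sketched as:

```python
# Hypothetical sketch of the claimed demarcate/magnify/draw sequence.
# The grid model, function names, and PRESET_SCALE are illustrative assumptions.
PRESET_SCALE = 2

def magnify_subregion(room_map, sub, scale=PRESET_SCALE):
    """Build a layer in which only the demarcated `sub` (r0, c0, r1, c1) is magnified."""
    r0, c0, r1, c1 = sub
    layer = []
    for r in range(r0, r1):
        row = []
        for c in range(c0, c1):
            row.extend([room_map[r][c]] * scale)   # widen each cell by the preset scale
        for _ in range(scale):                     # repeat each row by the preset scale
            layer.append(list(row))
    return layer

def draw_boundary_units(layer, boundary_coords, sub, scale=PRESET_SCALE, mark="*"):
    """Draw point units at region-boundary coordinates on the magnified layer."""
    r0, c0 = sub[0], sub[1]
    for r, c in boundary_coords:
        layer[(r - r0) * scale][(c - c0) * scale] = mark
    return layer

room = [["." for _ in range(4)] for _ in range(4)]   # base room map, 4x4
sub = (1, 1, 3, 3)                                   # demarcated 2x2 sub-region
layer = magnify_subregion(room, sub)                 # layer is 4x4; base map untouched
layer = draw_boundary_units(layer, [(1, 1), (2, 2)], sub)
print(len(layer), len(layer[0]))                     # 4 4
```

Only the demarcated sub-region is scaled into the overlay layer; the base room map, and hence every region outside the sub-region, keeps its original size, which is the distinction the applicant's remarks press.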
Regarding dependent claims 5-7, 9-10, and 14-16, no claim adds a limitation that introduces a practical application to the claimed invention; the dependent claims merely add further mental processes, mathematical concepts, and post-solution activities, and are thus not patent eligible. Therefore, Claims 1, 5-7, 9-10, and 14-16 are ineligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 7, 9-10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (CN 110200549 A English Translation) in view of Ng Dylan et al (WO 2021010899 A1) and Kawahara et al (JP 2003287424 A), hereafter referred to as Liu, Ng Dylan, and Kawahara.
Regarding Claim 1, Liu teaches a map display method (see at least Liu [¶ 48]) comprising:

acquiring, by a user equipment in interaction with a cleaning apparatus, room map data and region map data, wherein the region map data comprises region boundary coordinate data (see at least Liu [¶ 48, 71, 84, 107]: the cleaning system includes a cleaning robot 101 and a terminal 102. The cleaning robot 101 collects environmental images through an image acquisition device. The cleaning robot 101 identifies a ground area from the environmental image…The cleaning robot 101 sends the display map of the ground area to the terminal, wherein the display map of the ground area can indicate the location of the at least one sub-area...The terminal 102 receives a target area input by a user and then sends the target area to the cleaning robot 101. The cleaning robot 101 cleans the target area…The environment image may be, for example, an image of any area indoors…after the sub-region is identified, the outline of the sub-region can be extracted, and based on the outline of the sub-region, the cleaning robot can determine the position of the sub-region, wherein the position of the sub-region includes, but is not limited to: the boundary of the sub-region…coordinates are set for each area, specifically: different areas have unique corresponding coordinates);

generating, by the user equipment, a room map based on the room map data, and a region map layer on the room map based on the region map data (see at least Liu [¶ 88, 72-73]: a possible method for determining a display map of a ground area is: determining the display map of the ground area through color data and/or texture data of at least one sub-area…The displayed map of the ground area includes color data and/or texture data of at least one sub-area…after the target map is generated, coordinates are set for each area, specifically: different areas have unique corresponding coordinates…the method for identifying the ground area from the environment image may be: segmenting the environment image by using a preset image segmentation model to obtain the ground area);

and displaying, by the user equipment, the room map covered with the region map layer (see at least Liu [¶ 65, 87]: the cleaning robot may further include an input and output unit, a position measurement unit, a wireless communication unit, a display unit…the displayed map of the ground area in the embodiment of the present application can indicate the location of at least one sub-area).

However, Liu does not explicitly teach wherein generating the region map layer on the room map based on the region map data comprises: determining a position relationship between the room map and the region map layer based on the room map data and the region map data; and demarcating a sub-region corresponding to the region map layer from the room map based on the position relationship.

Ng Dylan, in the same field of endeavor, teaches determining a position relationship between the room map and the region map layer based on the room map data and the region map data (see at least Ng Dylan [¶ 13-14, 76]: the one or more user defined parameters comprise one or more of the following: one or more designated cleaning tasks, geographical information of one or more designated cleaning area, infrastructure or environmental information of the one or more designated cleaning area…The task planning module 230 may divide the whole cleaning area to multiple cleaning zones).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Liu such that generating the region map layer on the room map based on the region map data comprises determining a position relationship between the room map and the region map layer based on the room map data and the region map data, with reasonable expectation of success.
One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the navigation and mapping capabilities of the system by allowing for navigation of a larger area that is divided into smaller designated areas, where knowing the location of the smaller area within the larger area may be used when planning certain tasks such as cleaning.

Further, Liu does not explicitly teach demarcating a sub-region corresponding to the region map layer from the room map based on the position relationship, and generating the region map layer on the sub-region by magnifying the sub-region by a preset scale, keeping a size of another region outside the sub-region unchanged, and drawing point units or linear units corresponding to the region map data on the magnified sub-region.

Kawahara, in the field of mapping and map information display, teaches demarcating a sub-region corresponding to the region map layer from the room map based on the position relationship (see at least Kawahara [English Translation pg. 3 para. 3]: Based on the map information and the specific spot information read from the map information storage means, the section displays the map information in which the enlarged area icon is superimposed on the point corresponding to the specific spot information on the display and detects the current position detection means…processing unit that magnifies map information in the sub area);

and generating the region map layer on the sub-region by magnifying the sub-region by a preset scale, keeping a size of another region outside the sub-region unchanged (see at least Kawahara [English Translation pg. 3 para. 3-6, Fig. 9, and Fig. 12]: When the displayed current position approaches the magnified area icon, a region centering on the point where the magnified area icon is superimposed is set as a sub area, and an icon processing unit that magnifies map information in the sub area is provided…Next, the map information display method of the present invention replaces the predetermined sub-area of the map information with the map information obtained by enlarging the map information in the sub-area and displays it on the display…the enlargement ratio of the map information in the sub area);

and drawing point units or linear units corresponding to the region map data on the magnified sub-region (see at least Kawahara [English Translation pg. 3 para. 7]: specific spot information, which is information of a point suitable for partially enlarging and displaying the map information, is associated with each point of the map information, and the enlarged area is expanded to the point corresponding to the specific spot information…As a sub area, the map information in the sub area is enlarged).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Liu to demarcate a sub-region corresponding to the region map layer from the room map based on the position relationship, and to generate the region map layer on the sub-region by magnifying the sub-region by a preset scale, keeping a size of another region outside the sub-region unchanged, and drawing point units or linear units corresponding to the region map data on the magnified sub-region, with reasonable expectation of success.
One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the user experience of the system's displayed map by allowing the user to see one or more different zoomed-in areas of the map, so as to easily see and understand a specific area in more detail.

Regarding Claim 10, Liu teaches an electronic apparatus, comprising: a processor (see at least Liu [¶ 22]); and a memory configured to store an instruction executable by the processor (see at least Liu [¶ 22]), wherein the processor, through executing the executable instruction, is configured to:

acquire room map data and region map data, wherein the region map data comprises region boundary coordinate data (see at least Liu [¶ 48, 71, 84, 107]: the cleaning system includes a cleaning robot 101 and a terminal 102. The cleaning robot 101 collects environmental images through an image acquisition device. The cleaning robot 101 identifies a ground area from the environmental image…The cleaning robot 101 sends the display map of the ground area to the terminal, wherein the display map of the ground area can indicate the location of the at least one sub-area...The terminal 102 receives a target area input by a user and then sends the target area to the cleaning robot 101.
The cleaning robot 101 cleans the target area…The environment image may be, for example, an image of any area indoors…after the sub-region is identified, the outline of the sub-region can be extracted, and based on the outline of the sub-region, the cleaning robot can determine the position of the sub-region, wherein the position of the sub-region includes, but is not limited to: the boundary of the sub-region…coordinates are set for each area, specifically: different areas have unique corresponding coordinates);

generate a room map based on the room map data, and generate a region map layer on the room map based on the region map data (see at least Liu [¶ 88, 72-73]: a possible method for determining a display map of a ground area is: determining the display map of the ground area through color data and/or texture data of at least one sub-area…The displayed map of the ground area includes color data and/or texture data of at least one sub-area…after the target map is generated, coordinates are set for each area, specifically: different areas have unique corresponding coordinates…the method for identifying the ground area from the environment image may be: segmenting the environment image by using a preset image segmentation model to obtain the ground area);

and display the room map covered with the region map layer (see at least Liu [¶ 65, 87]: the cleaning robot may further include an input and output unit, a position measurement unit, a wireless communication unit, a display unit…the displayed map of the ground area in the embodiment of the present application can indicate the location of at least one sub-area).
However, Liu does not explicitly teach wherein the processor is specifically configured to: determine a position relationship between the room map and the region map layer based on the room map data and the region map data.

Ng Dylan, in the same field of endeavor, teaches wherein the processor is specifically configured to determine a position relationship between the room map and the region map layer based on the room map data and the region map data (see at least Ng Dylan [¶ 13-14, 76]: the one or more user defined parameters comprise one or more of the following: one or more designated cleaning tasks, geographical information of one or more designated cleaning area, infrastructure or environmental information of the one or more designated cleaning area…The task planning module 230 may divide the whole cleaning area to multiple cleaning zones).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Liu such that generating the region map layer on the room map based on the region map data comprises determining a position relationship between the room map and the region map layer based on the room map data and the region map data, with reasonable expectation of success.

One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the navigation and mapping capabilities of the system by allowing for navigation of a larger area that is divided into smaller designated areas, where knowing the location of the smaller area within the larger area may be used when planning certain tasks such as cleaning.
Further, Liu does not explicitly teach a processor specifically configured to: demarcate a sub-region corresponding to the region map layer from the room map based on the position relationship, and generate the region map layer on the sub-region by magnifying the sub-region by a preset scale, keeping a size of another region outside the sub-region unchanged, and drawing point units or linear units corresponding to the region map data on the magnified sub-region.

Kawahara, in the field of mapping and map information display, teaches demarcating a sub-region corresponding to the region map layer from the room map based on the position relationship (see at least Kawahara [English Translation pg. 3 para. 3]: Based on the map information and the specific spot information read from the map information storage means, the section displays the map information in which the enlarged area icon is superimposed on the point corresponding to the specific spot information on the display and detects the current position detection means…processing unit that magnifies map information in the sub area);

and generating the region map layer on the sub-region by magnifying the sub-region by a preset scale, keeping a size of another region outside the sub-region unchanged (see at least Kawahara [English Translation pg. 3 para. 3-6, Fig. 9, and Fig. 12]: When the displayed current position approaches the magnified area icon, a region centering on the point where the magnified area icon is superimposed is set as a sub area, and an icon processing unit that magnifies map information in the sub area is provided…Next, the map information display method of the present invention replaces the predetermined sub-area of the map information with the map information obtained by enlarging the map information in the sub-area and displays it on the display…the enlargement ratio of the map information in the sub area);

and drawing point units or linear units corresponding to the region map data on the magnified sub-region (see at least Kawahara [English Translation pg. 3 para. 7]: specific spot information, which is information of a point suitable for partially enlarging and displaying the map information, is associated with each point of the map information, and the enlarged area is expanded to the point corresponding to the specific spot information…As a sub area, the map information in the sub area is enlarged).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Liu to demarcate a sub-region corresponding to the region map layer from the room map based on the position relationship, and to generate the region map layer on the sub-region by magnifying the sub-region by a preset scale, keeping a size of another region outside the sub-region unchanged, and drawing point units or linear units corresponding to the region map data on the magnified sub-region, with reasonable expectation of success.
One of ordinary skill in the art would have been motivated to make such a modification for benefit of improving the user experience of the system’s displayed map by allowing the user to see one or more different zoomed-in areas of the map as to easily see and understand a specific area in more detail. Regarding Claim 7 and Claim 16, Liu in view of Ng Dylan and Kawahara teaches all limitations of the method of Claim 1 and the apparatus of Claim 10 as set forth above. Liu further teaches, wherein the room map data comprises room boundary coordinate data (see at least Liu [¶ 107, 84] after the target map is generated, coordinates are set for each area, specifically: different areas have unique corresponding coordinates….the cleaning robot can determine the position of the sub-region, wherein the position of the sub-region includes, but is not limited to: the boundary of the sub-region) and the generating the room map based on the room map data comprises: determining a shape and a size of a room map layer based on the room boundary coordinate data (see at least Liu [¶ 148] By displaying different colors for boundary areas and non-boundary areas, the size and shape of the sub-area can be more intuitively displayed to the user, thereby improving the display effect of the displayed map and providing the user with a more convenient way to select an area. Therefore, the user experience can be improved to a certain extent) and generating the room map by drawing the room boundary coordinate data on the room map layer (see at least Liu [¶ 96-97] the method for determining the display map according to the boundary line may be: combining at least one sub-area according to the boundary line to obtain the display map. 
When combining, the combination is performed in an overlapping manner, and the overlapping manner can be understood as that sub-areas with overlapping boundary lines are arranged adjacent to each other…the display map is determined by the boundary lines, and only the boundary lines need to be spliced and calculated).

Regarding Claim 9, Liu in view of Ng Dylan and Kawahara teaches all limitations of the method of Claim 1 as set forth above. Liu further teaches a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program is executed by a processor to implement the map display method according to Claim 1 (see at least Liu [¶ 23-24] the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program enables a computer to execute part or all of the steps described in the first and second aspects of the embodiments of the present application).

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (CN 110200549 A English Translation) in view of Ng Dylan et al. (WO 2021010899 A1), Kawahara et al. (JP 2003287424 A), and Jeong et al. (US 20100211244 A1), hereafter referred to as Liu, Ng Dylan, Kawahara, and Jeong respectively.

Regarding Claim 5 and Claim 14, Liu in view of Ng Dylan and Kawahara teaches all limitations of the method of Claim 1 and the apparatus of Claim 10 as set forth above. However, the combination does not explicitly teach wherein the method further comprises: restoring the magnified sub-region to an original size, and covering the room map with the region map layer.
Jeong, in the same field of endeavor, teaches wherein the method further comprises: restoring the magnified sub-region to an original size, and covering the room map with the region map layer (see at least Jeong [¶ 59] enlarged part 435 shows the straight path of the mobile object being potentially beyond the defined free area of enlarged part 435. Thus, according to an embodiment, an approximate path generated in a reduced grid map is replanned in the original grid map, i.e., the corresponding portion of the approximate path, which will be mapped back into the original grid map for sectional path generation, may be redefined to only traverse through defined free areas). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Liu to contain a system for restoring the magnified sub-region to an original size, and covering the room map with the region map layer, with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of employing techniques commonly used in the art of mapping.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Liu (CN 110200549 A English Translation) in view of Ng Dylan et al. (WO 2021010899 A1), Kawahara et al. (JP 2003287424 A), and Huang et al. (CN 111857127 A), hereafter referred to as Liu, Ng Dylan, Kawahara, and Huang respectively.

Regarding Claim 6 and Claim 15, Liu in view of Ng Dylan and Kawahara teaches all limitations of the method of Claim 1 and the apparatus of Claim 10 as set forth above. However, the combination does not explicitly teach wherein an area of the sub-region is greater than an area of the region map layer.
Huang, in the same field of endeavor, teaches wherein an area of the sub-region is greater than an area of the region map layer (see at least Huang [¶ 7, FIG. 3] the initial room cleaning partitions of the robot are divided in real time in the pre-defined cleaning area according to the map image pixel information obtained by laser scanning during the edge-running process. In the same pre-defined cleaning area, the initial room cleaning partitions of the robot are expanded by repeatedly iteratively processing the wall boundaries of the uncleaned areas, thereby ensuring that the contour boundaries of the preset room cleaning partitions finally formed in the same pre-defined cleaning area are similar to the wall boundaries of indoor home rooms…FIG. 3 is a diagram showing the effect of framing a predefined cleaning area P2). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Liu to contain a system wherein an area of the sub-region is greater than an area of the region map layer, with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the efficiency of the mapping and cleaning of the room as discussed in Huang (see at least Huang [¶ 7] thereby improving the efficiency of the robot's navigation along the boundaries of the preset room cleaning partitions and effectively preventing the robot from repeatedly cleaning within the preset room cleaning partitions).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH A YANOSKA whose telephone number is (703) 756-5891. The examiner can normally be reached M-F 9:00am to 5:00pm (Pacific Time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi, can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH ANDERSON YANOSKA/
Examiner, Art Unit 3664

/RACHID BENDIDI/
Supervisory Patent Examiner, Art Unit 3664
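For readers tracking the §103 dispute, the central claim limitation (magnifying a demarcated sub-region of the room map by a preset scale while the area outside the sub-region keeps its size, then drawing point units on the magnified layer) can be sketched in a few lines. This is a minimal illustrative sketch, not taken from the application or any cited reference; the grid representation, function names, and coordinates are invented for illustration only.

```python
# Illustrative sketch of the claimed operation: nearest-neighbor magnification
# of a rectangular sub-region into a separate "region map layer", leaving the
# base room map untouched, then drawing point units onto the magnified layer.

def magnify_subregion(grid, top, left, height, width, scale):
    """Return grid[top:top+height][left:left+width] upscaled by an
    integer `scale` using nearest-neighbor replication. The base
    grid is not modified."""
    layer = []
    for r in range(height * scale):
        src_row = grid[top + r // scale]
        layer.append([src_row[left + c // scale] for c in range(width * scale)])
    return layer

def draw_points(layer, points, marker):
    """Draw point units (given as (row, col) in layer coordinates)
    onto the magnified layer."""
    for r, c in points:
        layer[r][c] = marker
    return layer

if __name__ == "__main__":
    room = [
        [0, 0, 1, 1],
        [0, 2, 2, 1],
        [0, 2, 2, 1],
        [0, 0, 0, 0],
    ]
    # Magnify the 2x2 block of 2s by a preset scale of 2; `room` is unchanged.
    sub = magnify_subregion(room, 1, 1, 2, 2, 2)
    draw_points(sub, [(0, 0), (3, 3)], 9)
    print(sub)
    print(room)
```

In a real implementation the layer would be composited over the display while surrounding regions render at their original scale; the sketch only shows the magnify-and-draw step that the rejection maps to Liu and Kawahara.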

Prosecution Timeline

Aug 10, 2023
Application Filed
Apr 01, 2025
Non-Final Rejection — §101, §103
Jul 07, 2025
Response Filed
Oct 01, 2025
Final Rejection — §101, §103
Dec 05, 2025
Response after Non-Final Action
Jan 07, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Mar 04, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600502
NEURAL NETWORK-GUIDED PASSIVE SENSOR DRONE INSPECTION SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12548454
CONTROLLING DRONE NOISE BASED UPON HEIGHT
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12530031
VIRTUAL OFF-ROADING GUIDE
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12447969
LIMITED USE DRIVING OPERATIONS FOR VEHICLES
Granted Oct 21, 2025 (2y 5m to grant)
Patent 12366859
TROLLING MOTOR AND SONAR DEVICE DIRECTIONAL CONTROL
Granted Jul 22, 2025 (2y 5m to grant)
Based on this examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 38%
With Interview: 99% (+60.1%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
