Prosecution Insights
Last updated: April 19, 2026

Application No. 18/472,877
AUTONOMOUS DRIVING DEVICE

Non-Final Office Action (§103, §112)

Filed: Sep 22, 2023
Examiner: SHARMA, SHIVAM
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Semes Co. Ltd.
OA Round: 3 (Non-Final)

Outlook
Grant Probability: 44% (Moderate)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 1m
Grant Probability with Interview: 43%
Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 15 granted / 34 resolved; -7.9% vs TC avg)
Interview Lift: -1.3% (minimal, roughly -1% lift; grant rate among resolved cases with vs. without an interview)
Typical Timeline: 3y 1m avg prosecution; 49 applications currently pending
Career History: 83 total applications across all art units

Statute-Specific Performance

Statute    Allowance Rate    vs TC Avg
§101       11.8%             -28.2%
§102       19.4%             -20.6%
§103       44.8%             +4.8%
§112       24.0%             -16.0%

Tech Center averages are estimates • Based on career data from 34 resolved cases
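For readers who want to sanity-check these figures, here is a minimal arithmetic sketch. It assumes each "vs TC avg" value is a simple percentage-point difference and back-computes the implied Tech Center averages, which the report does not state directly.

    # Minimal sketch of the dashboard arithmetic (assumptions noted above).

    granted, resolved = 15, 34
    career_allow_rate = granted / resolved          # ~0.441, displayed as 44%
    implied_tc_avg = career_allow_rate + 0.079      # "-7.9% vs TC avg" -> ~52.0%

    # Statute-specific allowance rates and their stated deltas vs the TC average.
    rates  = {"§101": 0.118, "§102": 0.194, "§103": 0.448, "§112": 0.240}
    deltas = {"§101": -0.282, "§102": -0.206, "§103": 0.048, "§112": -0.160}

    for statute, rate in rates.items():
        tc_avg = rate - deltas[statute]             # e.g. §103: 44.8% - 4.8% = 40.0%
        print(f"{statute}: allow rate {rate:.1%}, implied TC avg {tc_avg:.1%}")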

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the application filed on 12/17/2025 for Application No. 18/472,877. Claims 1, 5, 6 and 8 – 19 are currently pending and have been examined. Claims 1 – 3, 7, 9, 12 – 14 and 16 have been amended. Claims 2 – 4, 7 and 20 have been cancelled. This action is made NON-FINAL.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“map generating module” in claims 1, 5, 6, 13, 15, 17 and 18.
“measurement module” in claims 1 and 13.
“obstacle sensing module” in claim 16.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
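As a reading aid for the analysis above, here is a toy encoding of the MPEP § 2181 three-prong test. It is illustrative only: the real inquiry is a legal judgment, not a boolean check, and the field names below are assumptions made for the sketch rather than anything from the record.

    from dataclasses import dataclass

    @dataclass
    class Limitation:
        # Prong (A): uses "means"/"step" or a generic placeholder (nonce term).
        uses_nonce_term: bool
        # Prong (B): the term is modified by functional language.
        has_functional_language: bool
        # Negation of prong (C): sufficient structure is recited for the function.
        recites_sufficient_structure: bool

    def invokes_112f(lim: Limitation) -> bool:
        """All three prongs must be met for 35 U.S.C. 112(f) to apply."""
        return (lim.uses_nonce_term
                and lim.has_functional_language
                and not lim.recites_sufficient_structure)

    # e.g., "map generating module ... configured to generate a 3-dimensional map"
    print(invokes_112f(Limitation(True, True, False)))  # True -> treated under 112(f)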
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 5, 6, 8 – 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 states: “wherein the measurement module is configured to measure… a sheathing condition and a grounding condition of a cable mounted on the traveling rail” and Claim 13 states: “the measurement module is further configured to generate 3-dimensional modeling information including information about … a sheathing condition and a grounding condition of a cable mounted on the traveling rail”. In both claims, it is indefinite how a measurement module is able to measure, or generate 3-dimensional modeling information for, these conditions. For example, what is being measured in a sheathing condition? Furthermore, it is indefinite what a grounding condition is: for example, how is the cable performing a grounding condition, and how is that being measured/generated? Due to this indefiniteness, the sheathing condition and grounding condition will be broadly interpreted as a cable which is mounted on the traveling rail.

Claims 1 and 13 state: “wherein the measurement module is further configured to generate 3-dimensional modeling information based on a 2-dimensional map and transmit the 3-dimensional modeling information to the map generating module”; however, it is indefinite how a 2-dimensional map is able to generate 3-dimensional modeling information. The modeling information includes the shape and position of the manufacturing equipment as stated in the specification (Specification: Paragraph 0007); however, it is indefinite how height data is extracted from the 2-dimensional map. For example, if a certain piece of equipment is placed on a higher platform or hanging from the wall, how can the 3-dimensional modeling information include the correct position of the equipment? Furthermore, the shape data for the 3-dimensional modeling information is also indefinite, as a 2-dimensional map cannot capture the entire shape of the machine. For example, a top-view map may represent a funnel as a large rectangle, whereas a ground-view perspective can show the funnel narrowing toward its bottom; thus it is indefinite how the shape data for 3-dimensional modeling is determined by using only a top-view map or only the ground-view perspective. The measurement module is taught to have a 3-dimensional sensor, as stated in the specification (Specification: Paragraph 0029); however, the claims do not recite which 3-dimensional information is used to generate the 3-dimensional modeling information.

Claims 5, 6, and 8 – 12 are rejected as being dependent upon independent claim 1. Claims 14 – 19 are rejected as being dependent upon independent claim 13.
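To make the height-ambiguity point above concrete, here is a minimal illustrative sketch. The voxel data is invented for illustration and comes from neither the application nor the Office Action: two different 3-D shapes share an identical top-view footprint, so a top-view 2-D map alone cannot fix an object's height or shape.

    def footprint(voxels):
        """Project a set of (x, y, z) occupancy voxels onto the 2-D floor plan."""
        return {(x, y) for x, y, z in voxels}

    # A 3x3 column of uniform cross-section (a "box")...
    box = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}
    # ...and a funnel-like shape that narrows toward the bottom.
    funnel = {(x, y, 2) for x in range(3) for y in range(3)} | {(1, 1, 1), (1, 1, 0)}

    print(footprint(box) == footprint(funnel))  # True: identical top-view 2-D maps
    print(box == funnel)                        # False: different 3-D shapes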
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5, 6, 8 and 11 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Baalke et al. (US 11232391 B1), Al-Yousef et al. (US 11341830 B2), Daisuke et al. (JP6412974B1) and further in view of Lee et al. (US 20220126699 A1).

Regarding claim 1, Baalke teaches an autonomous driving device comprising: (Baalke: Abstract: “Customized navigation maps of an area are generated for autonomous vehicles based on a baseline map of the area, transportation systems within the area, and attributes of the autonomous vehicles.”)

… a measurement module mounted on each of the plurality of vehicles and including a LiDAR sensor and a 3-dimensional sensor; and (Baalke: Col. 29, lines 10 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may include one or more omnidirectional or 360° LIDAR sensors or other range sensors arranged on the top of the autonomous vehicles 550-1, 550-2, 550-3 or in other locations or positions, such as at either end of the autonomous vehicles 550-1, 550-2, 550-3. For example, each of such sensors may generate one or more three-dimensional distance maps, e.g., in the form of a 3D point cloud representing distances at nominal ranges (e.g., between one meter and fifty meters) from the LIDAR sensor and an external surface within the field of view of the LIDAR sensor by rotating the LIDAR sensor (i.e., once per scan cycle). One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”)

a map generating module configured to receive information from the measurement module, wherein the map generating module is further configured to generate a 3-dimensional map (Baalke: Col. 3, lines 5 – 9: “The customized navigation maps that are generated for travel by autonomous vehicles within the area may thus be subject to constant revision as new or updated information or data regarding the area is identified.”; Col. 5, lines 5 – 17: “For example, some or all of the set of data 122 regarding the outdoor transportation infrastructure may be determined from information or data previously captured during travel within the area or environment, e.g., based at least in part on one or more time stamps, net speeds, courses, angles of orientation, levels of traffic congestion, sizes or dimensions of any payloads carried, operational or environmental conditions, or any other information or data, captured or otherwise determined by autonomous vehicles or by one or more persons or machines.
Moreover, some or all of the set of data regarding the outdoor transportation infrastructure may be updated in real time or in near-real time, on a synchronous or asynchronous basis.”; Col. 7, lines 31 – 37: “The server 192 may utilize any information or data received from the autonomous vehicle 150, or from any other autonomous vehicles or other systems or machines (not shown), to update one or more of the baseline map 105, the sets of data 120, 122, 124, or any customized navigation maps generated based thereon, in accordance with the present disclosure.”, Supplemental Note: the sensors of the autonomous vehicle are able to be used to create a 3D map of the environment. The 3D mapping of the autonomous vehicle can be used to update the baseline/customized navigation maps) … wherein the map generating module is configured to reflect the information about obstacles into the 3-dimensional map, (Baalke: Col. 29, lines 10 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may include one or more omnidirectional or 360° LIDAR sensors or other range sensors arranged on the top of the autonomous vehicles 550-1, 550-2, 550-3 or in other locations or positions, such as at either end of the autonomous vehicles 550-1, 550-2, 550-3. For example, each of such sensors may generate one or more three-dimensional distance maps, e.g., in the form of a 3D point cloud representing distances at nominal ranges (e.g., between one meter and fifty meters) from the LIDAR sensor and an external surface within the field of view of the LIDAR sensor by rotating the LIDAR sensor (i.e., once per scan cycle). One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”; Col. 34, line 55 – Col. 35, line 2: “The server 592 may continuously receive information or data regarding the area or environment (e.g., the infrastructure or features therein) from any source, and may update the customized navigation maps 525-1, 525-2, 525-3 generated for the autonomous vehicles 550-1, 550-2, 550-3. In some embodiments, information or data may be received from operating autonomous vehicles 550-4, 550-5, 550-6, 550-7, 550-8 within the area or environment, regarding their past, present or future operations, e.g., their respective speeds, courses, positions (e.g., latitudes and longitudes), elevations or angles of orientation (e.g., yaws, pitches or rolls), as well as operational or environmental conditions such as surface conditions, traffic conditions, congestion or any other relevant factors encountered by the autonomous vehicles to the server 592 or other networked computer systems.”, Supplemental Note: the customized navigation maps are able to be updated by the 3d sensors of the autonomous vehicles, therefore interpreted to be updated 3d maps. The vehicle is also able to detect obstacles such as traffic conditions and congestions along with other relevant factors encountered by the vehicle) … wherein the map generating module is further configured to generate the 3- dimensional map, based on the 2-dimensional map and the 3-dimensional modeling information, (Baalke: Col. 
29, lines 10 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may include one or more omnidirectional or 360° LIDAR sensors or other range sensors arranged on the top of the autonomous vehicles 550-1, 550-2, 550-3 or in other locations or positions, such as at either end of the autonomous vehicles 550-1, 550-2, 550-3. For example, each of such sensors may generate one or more three-dimensional distance maps, e.g., in the form of a 3D point cloud representing distances at nominal ranges (e.g., between one meter and fifty meters) from the LIDAR sensor and an external surface within the field of view of the LIDAR sensor by rotating the LIDAR sensor (i.e., once per scan cycle). One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”; Col. 34, line 55 – Col. 35, line 2: “The server 592 may continuously receive information or data regarding the area or environment (e.g., the infrastructure or features therein) from any source, and may update the customized navigation maps 525-1, 525-2, 525-3 generated for the autonomous vehicles 550-1, 550-2, 550-3. In some embodiments, information or data may be received from operating autonomous vehicles 550-4, 550-5, 550-6, 550-7, 550-8 within the area or environment, regarding their past, present or future operations, e.g., their respective speeds, courses, positions (e.g., latitudes and longitudes), elevations or angles of orientation (e.g., yaws, pitches or rolls), as well as operational or environmental conditions such as surface conditions, traffic conditions, congestion or any other relevant factors encountered by the autonomous vehicles to the server 592 or other networked computer systems.”, Supplemental Note: the customized navigation maps are able to be updated by the 3d sensors of the autonomous vehicles, therefore interpreted to be updated 3d maps) wherein the map generating module is further configured to generate the 3- dimensional map including a map in which the 3-dimensional modeling information missing from the 2-dimensional map is restored, by comparing the 3-dimensional modeling information with the 2-dimensional map. (Baalke: Col. 29, lines 10 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may include one or more omnidirectional or 360° LIDAR sensors or other range sensors arranged on the top of the autonomous vehicles 550-1, 550-2, 550-3 or in other locations or positions, such as at either end of the autonomous vehicles 550-1, 550-2, 550-3. For example, each of such sensors may generate one or more three-dimensional distance maps, e.g., in the form of a 3D point cloud representing distances at nominal ranges (e.g., between one meter and fifty meters) from the LIDAR sensor and an external surface within the field of view of the LIDAR sensor by rotating the LIDAR sensor (i.e., once per scan cycle). 
One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”; Col. 34, line 55 – Col. 35, line 2: “The server 592 may continuously receive information or data regarding the area or environment (e.g., the infrastructure or features therein) from any source, and may update the customized navigation maps 525-1, 525-2, 525-3 generated for the autonomous vehicles 550-1, 550-2, 550-3. In some embodiments, information or data may be received from operating autonomous vehicles 550-4, 550-5, 550-6, 550-7, 550-8 within the area or environment, regarding their past, present or future operations, e.g., their respective speeds, courses, positions (e.g., latitudes and longitudes), elevations or angles of orientation (e.g., yaws, pitches or rolls), as well as operational or environmental conditions such as surface conditions, traffic conditions, congestion or any other relevant factors encountered by the autonomous vehicles to the server 592 or other networked computer systems.”; Col. 4, line 55 – Col. 5, line 17: “The set of data 122 may thus include one or more layers or other forms of data defining a plurality of paths extending over or along the outdoor transportation infrastructure within the area or environment, and may have been identified or determined from any source, e.g., from one or more networked data stores, or the like, including but not limited to the same sources as the baseline map 105 shown in FIG. 1A. For example, some or all of the set of data 122 regarding the outdoor transportation infrastructure may be determined from information or data previously captured during travel within the area or environment, e.g., based at least in part on one or more time stamps, net speeds, courses, angles of orientation, levels of traffic congestion, sizes or dimensions of any payloads carried, operational or environmental conditions, or any other information or data, captured or otherwise determined by autonomous vehicles or by one or more persons or machines. 
Moreover, some or all of the set of data regarding the outdoor transportation infrastructure may be updated in real time or in near-real time, on a synchronous or asynchronous basis.”, Supplemental Note: like the customized maps, the (2d) baseline maps can also be updated from the 3d information captured by the autonomous vehicle)

In sum, Baalke teaches an autonomous driving device comprising: a measurement module mounted on each of the plurality of vehicles and including a LiDAR sensor and a 3-dimensional sensor; and a map generating module configured to receive information from the measurement module, wherein the map generating module is further configured to generate a 3-dimensional map; wherein the map generating module is configured to reflect the information about obstacles into the 3-dimensional map; wherein the map generating module is further configured to generate the 3-dimensional map, based on the 2-dimensional map and the 3-dimensional modeling information; and wherein the map generating module is further configured to generate the 3-dimensional map including a map in which the 3-dimensional modeling information missing from the 2-dimensional map is restored, by comparing the 3-dimensional modeling information with the 2-dimensional map.

Baalke, however, does not teach: a plurality of vehicles configured to travel on a ceiling of a manufacturing line in which manufacturing equipment is arranged; a traveling rail arranged along the ceiling of the manufacturing line to provide a movement path for each of the plurality of vehicles; a sheathing condition and a grounding condition of a cable mounted on the traveling rail; and obstacles located on the traveling rail when each of the plurality of vehicles is traveling, whereas Lee does.

Lee teaches a plurality of vehicles configured to travel on a ceiling of a manufacturing line in which manufacturing equipment is arranged; a traveling rail arranged along the ceiling of the manufacturing line to provide a movement path for each of the plurality of vehicles; (Lee: Paragraph 0042: “The transport vehicle 20 travels along a rail 10 installed on a ceiling and has a wireless interface to communicate with a high-level server (a vehicle control apparatus) providing a transporting operation command.”; Paragraph 0043: “The rail 10 and a plurality of transport vehicles 20 may be provided. The rail 10 forms a transporting path (for example, a ceiling rail) for transporting goods from one manufacturing equipment item 1 to another. The plurality of transport vehicles 20 transport the goods to one manufacturing equipment item 1 to another while traveling along the rail 10.”)

… a sheathing condition and a grounding condition of a cable mounted on the traveling rail, (Lee: Paragraph 0043: “At this point, the transport vehicle 20 transporting the goods may receive its motive power through an electricity supply unit (for example, a power supply cable) provided along the rail 10.”, Supplemental Note: please refer to section Claim Rejections - 35 USC § 112 for the indefiniteness rejection of this claim limitation)

and obstacles located on the traveling rail when each of the plurality of vehicles is traveling, (Lee: Paragraph 0042: “According to a command of an integrated control system, the vehicle control apparatus searches for the shortest path from a starting point to a destination to finish a transporting operation in the least amount of time and selects the transport vehicle 20 positioned at an optimal position for performing a transporting operation. Then, the vehicle control apparatus provides a transporting command to the selected transport vehicle 20.”; Paragraph 0043: “With reference to FIG. 1, the manufacturing equipment items 1 for performing processes are installed in the semiconductor or display manufacturing line. The rail 10 and a plurality of transport vehicles 20 may be provided. The rail 10 forms a transporting path (for example, a ceiling rail) for transporting goods from one manufacturing equipment item 1 to another. The plurality of transport vehicles 20 transport the goods to one manufacturing equipment item 1 to another while traveling along the rail 10. At this point, the transport vehicle 20 transporting the goods may receive its motive power through an electricity supply unit (for example, a power supply cable) provided along the rail 10.”, Supplemental Note: multiple vehicles can be utilized in this system, thus the locations of multiple vehicles are known when determining a shortest path for a vehicle to travel to a destination)
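Lee's vehicle control apparatus "searches for the shortest path from a starting point to a destination" (Paragraph 0042), but no particular algorithm is disclosed. As an illustration only, a breadth-first search over a toy ceiling-rail graph (hypothetical stations and unit-length segments, none of it from Lee) finds such a path:

    from collections import deque

    # Hypothetical rail segments between stations along the ceiling rail.
    rail = {
        "A": ["B"], "B": ["A", "C", "D"],
        "C": ["B", "E"], "D": ["B", "E"], "E": ["C", "D"],
    }

    def shortest_path(start, goal):
        """Breadth-first search: fewest rail segments from start to goal."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in rail[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(shortest_path("A", "E"))  # ['A', 'B', 'C', 'E']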
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Baalke with the teachings of Lee with a reasonable expectation of success. The autonomous robot as taught by Baalke is able to traverse both indoor and outdoor environments. When traversing indoor facilities, such as a manufacturing facility, to avoid any unnecessary contact with the equipment, implementing Lee's teaching of a vehicle able to travel on a ceiling rail system improves the ability of the autonomous robot as it travels inside. For example, autonomous robots working solely inside these manufacturing facilities would be able to better map the facility, as they are placed higher and have a wider range of view than a robot traveling on the ground. This allows the robot to map out the facility more efficiently while also staying out of the way of any personnel on the ground or any other machines. For these reasons, one with knowledge in the art would find this combination to be applying a known technique, such as traveling on rails, to a known device, such as an autonomous robot, ready for improvement to yield predictable results per MPEP 2141.

Furthermore, Lee teaches the ability of the vehicle to be powered by a power supply cable, whereas Baalke teaches the robot to be powered by a module (Baalke: Col. 17, lines 25 – 44). One with knowledge in the art would find the battery power supply as taught by Baalke and the connected power supply on the rails as taught by Lee to be a simple substitution of one known element for another to obtain predictable results. For example, both Baalke and Lee use these modules to power their respective vehicles; thus both vehicles are being powered by a module to perform their tasks.

Baalke in view of Lee, however, still does not teach: [a 3-dimensional map] that 3-dimensionally represents the manufacturing line; wherein the measurement module is configured to measure a structure of the manufacturing line, a position of the manufacturing equipment on a floor of the manufacturing line; the 2-dimensional map representing the manufacturing line as a 2-dimensional grid structure, whereas Al-Yousef does.

Al-Yousef teaches [a 3-dimensional map] that 3-dimensionally represents the manufacturing line, (Al-Yousef: Abstract: “The present disclosure describes a computer-implemented method to manage an industrial plant facility”; Col.
5, lines 46 – 55: “The real time visualization progression (RTVP) 121B can display real-time build out activities program. For example, the RTVP 121B may superimpose build out 3D image with the real-time progress feed. The RTVP 121B may provide dashboard and reporting capabilities on both construction progress and safety behavior metrics. The RTVP 121B may have the capability to detect schedule and geometric mismatch between the real-time captured 3D module and the 3D planned design.”) wherein the measurement module is configured to measure a structure of the manufacturing line, a position of the manufacturing equipment on a floor of the manufacturing line, (Al-Yousef: Col. 1, lines 45 – 58: “Monitoring multiple streams of input data from a network of sensors may include: accessing streams of data from at least one of: an aerial scanning at the industrial plant facility, a mobile scanning at the industrial plant facility, and a floor scanning at the industrial plant facility, wherein the aerial scanning comprises operating at least one surveillance drone to monitor the industrial plant facility, wherein the mobile scanning comprises: operating at least one moveable sensor to monitor the industrial plant facility, wherein the floor scanning comprises: operating at least one fixed sensor to monitor the industrial plant facility, and wherein at least one of the one or more surveillance drones, the one or more moveable sensors, or the one or more fixed sensors comprise: the one or more camera devices.”; Col. 5, line 61 – Col. 6, line 13: “The safety and quality monitoring system (SQMS) 122B can be embodied as a construction project quality monitoring system (CPQMS) and a construction project safety monitoring system (CPSMS). In case of the CPQMS, SQMS 122B can project construction progress visualization through the creation of 3D models from videos and imagery taken either by ground CCTV systems or aerial photogrammetry such as from drones. The data from the 3D models are correlated with asset construction progress from resource management. This can include 2D engineering tools, materials management, project controls, scheduling systems, and video and analytics systems. The SQMS 122B can provide 3D scanning to capture the construction status and verify it against the design basis in the 3D model to ensure that future construction and operation will proceed smoothly and identify any quality issues as early as possible. The SQMS 122B can provide true 3D model from a circular aperture or multiple single aperture, with high definition resolution (mm to km), provides active and passive 3D modeling, and allows identification and tagging for industrial equipment.”; Col. 10, lines 3 – 5: “If the determination is that the incident is in a hazardous area, the process may proceed to acknowledge the alarm and second a position tag to a safety coordinator (325A)”, Supplemental Note: the 3d model can identify and tag the industrial equipment by floor scanning. Tagging can be a position tag, thus able to locate the industrial equipment that are located on the ground floor) … the 2-dimensional map representing the manufacturing line as a 2-dimensional grid structure, and (Al-Yousef: Col. 10, lines 17 – 25: “In some implementations, the plant floor is divided into a number of cells or areas of coverage to facilitate adequate wireless connectivity and reliable integration to the process control systems. 
The areas are defined as part of a matrix with rows and columns so that each cell is addressable by its row and column identifier. As illustrated in FIG. 4A, an example of a plant floor 400 is divided [into] sixteen cells, in rows 1 to 4 and columns a to d.”, Supplemental Note: the plant floor divided into rows and columns is interpreted as a 2-dimensional grid structure, as seen in Figure A)

Figure A - Al-Yousef: Fig. 4A (image not reproduced)
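For concreteness, here is a minimal sketch of the cell-addressing scheme Al-Yousef describes (Col. 10, lines 17 – 25): a plant floor divided into sixteen cells, rows 1 to 4 by columns a to d, each cell addressable by its row and column identifier. The coordinate-to-cell mapping and the cell size are assumptions made for the illustration.

    ROWS, COLS = ["1", "2", "3", "4"], ["a", "b", "c", "d"]
    CELL_SIZE = 10.0  # meters per cell side (hypothetical)

    def cell_of(x: float, y: float) -> str:
        """Map a floor coordinate to its addressable cell, e.g. (15, 32) -> '4b'."""
        col = COLS[min(int(x // CELL_SIZE), len(COLS) - 1)]
        row = ROWS[min(int(y // CELL_SIZE), len(ROWS) - 1)]
        return row + col

    print(cell_of(15.0, 32.0))  # '4b': column b (x in [10, 20)), row 4 (y in [30, 40))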
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Baalke with the teachings of Al-Yousef with a reasonable expectation of success. Regarding creating a 3d visualization, Baalke states (Baalke: Col. 5, line 52 – Col. 6, line 5), thus teaching that the inside of a facility can also be mapped and identified by the autonomous vehicle. Al-Yousef teaches the ability of various cameras around a manufacturing facility to build out a 3d visualization of the environment. To one with knowledge in the art, this is a simple substitution of the mapping abilities, as both references cite the ability to detect their environment and 3d map it. Al-Yousef further specifies this implementation to be used for an industrial plant facility, whereas Baalke teaches this implementation for both indoor and outdoor environments. This is furthered by the ability of Al-Yousef's system to identify and tag various equipment around the facility. For example, the autonomous robot of Baalke in combination with this ability will be able to travel through an industrial plant facility and identify/tag equipment. Baalke already teaches the ability to create a 3d point cloud (Baalke: Col. 29, lines 10 – 46); thus this combination only improves the autonomous robot's ability to properly identify equipment and tag their positions, instead of relying on a 3d point cloud alone, which is unable to capture this information. For these reasons, one with knowledge in the art would find the combination of these references to be obvious to try. Furthermore, both Baalke and Al-Yousef teach the ability to receive and identify 2d map features with the real world. Al-Yousef is using this map as a representation of an industrial plant facility, whereas Baalke uses it for both indoor and outdoor environments, stored on a server. Therefore, one with knowledge in the art would find both of these limitations to be a simple substitution.

Baalke in view of Al-Yousef still does not teach: wherein the measurement module is further configured to generate 3-dimensional modeling information based on a 2-dimensional map and transmit the 3-dimensional modeling information to the map generating module; the 3-dimensional modeling information includes information about the structure of the manufacturing line and a shape and the position of the manufacturing equipment, whereas Daisuke does.

Daisuke teaches wherein the measurement module is further configured to generate 3-dimensional modeling information based on a 2-dimensional map and transmit the 3-dimensional modeling information to the map generating module, (Daisuke: Paragraph 0007: “A first aspect of the present invention provides an information management apparatus comprising: receiving means for receiving images and photographing information in the equipment floor photographed by the robot from a self-propelled robot in the equipment floor; three-dimensional map data A storage unit that stores the image, a storage unit that is attached to a device installed on the equipment floor, detects a code indicating information specifying the device from the image, and acquires information identifying the device from the code Dimensional map data on the basis of the photographing information, information specifying the device indicated by the code and position information on the three-dimensional map data Mapping means for associating an alert indicating an abnormality of a device installed on the equipment floor, There characterized by having a display means for displaying the image of the photographed the equipment in the floor.”; Paragraph 0015: “The camera 11 photographs the surroundings of the self-propelled robot 1. The image photographed by the camera 11 and the photographing information such as the photographing direction are transmitted from the transmitter 14 to the server 3.”, Supplemental Note: the vehicle is able to capture images, interpreted as a 2-dimensional map, and gather information for three-dimensional map data)

… the 3-dimensional modeling information includes information about the structure of the manufacturing line and a shape and the position of the manufacturing equipment, (Daisuke: Paragraph 0033: “When the receiving unit 34 receives the alert (step S21), the display unit 35 acquires the information on the device included in the alert, specifies the position on the 3D map of the device (step S22), the display unit 35 displays the device A map of the equipment floor showing the position of the equipment, or information indicating the position of the equipment (step S23). The display unit 35 may display the image of the device which caused the alert.”; Paragraph 0019: “Based on the estimated self-position, the drive control unit 15 controls the motor or the like so that the self-propelled robot 1 moves along the route. When an obstacle is detected in the movement direction by the measurement sensor 12, an alert is notified. An obstacle may be one which is not permanently installed on a facility floor such as a corrugated cardboard box or a stepladder that has been forgotten after working on the equipment floor, for example.”, Supplemental Note: the robot is able to determine any alerts of a structure and their position/shape from the image)

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Baalke with the teachings of Daisuke with a reasonable expectation of success. Both Baalke and Daisuke teach self-driving robots able to traverse a facility and gather data about their environment. Daisuke teaches the ability of the robot to gather images that include 3D data utilized for detecting information about equipment, such as position and any alerts. Baalke utilizes a similar method where its robot is able to travel based on maps and sensors that gather 3D data.
One with knowledge in the art would find utilizing the robot navigation function of Daisuke with the robot of Baalke to be combining prior art elements according to known methods to yield predictable results. For example, both are analyzing the environment based on their images and 3D point data from their other sensors. Both are also able to capture sizes and positions of different objects as they are scanned by the robots; thus both are able to utilize the known method of gathering image and sensor data to yield a predictable result of mapping out environments for traveling.

Regarding claim 5, Baalke, as modified, teaches wherein the map generating module is further configured to transmit the 3-dimensional map to the plurality of vehicles. (Baalke: Abstract: “Customized navigation maps of an area are generated for autonomous vehicles based on a baseline map of the area, transportation systems within the area, and attributes of the autonomous vehicles. The customized navigation maps include a plurality of paths, and two or more of the paths may form an optimal route for performing a task by an autonomous vehicle. Customized navigation maps may be generated for outdoor spaces or indoor spaces, and include specific infrastructure or features on which a specific autonomous vehicle may be configured for travel. Routes may be determined based on access points at destinations such as buildings, and the access points may be manually selected by a user or automatically selected on any basis. The autonomous vehicles may be guided by GPS systems when traveling outdoors, and by imaging devices or other systems when traveling indoors.”, Supplemental Note: the customized navigation maps are sent to all of the vehicles of the system which, per claim 4 upon which this claim depends, cites the autonomous vehicles being able to create a 3d map to update the customized navigation maps)

Regarding claim 6, Baalke, as modified, teaches wherein the map generating module is further configured to train the 2-dimensional map, based on the 3-dimensional map. (Baalke: Col. 29, lines 10 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may include one or more omnidirectional or 360° LIDAR sensors or other range sensors arranged on the top of the autonomous vehicles 550-1, 550-2, 550-3 or in other locations or positions, such as at either end of the autonomous vehicles 550-1, 550-2, 550-3. For example, each of such sensors may generate one or more three-dimensional distance maps, e.g., in the form of a 3D point cloud representing distances at nominal ranges (e.g., between one meter and fifty meters) from the LIDAR sensor and an external surface within the field of view of the LIDAR sensor by rotating the LIDAR sensor (i.e., once per scan cycle). One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”; Col. 34, line 55 – Col. 35, line 2: “The server 592 may continuously receive information or data regarding the area or environment (e.g., the infrastructure or features therein) from any source, and may update the customized navigation maps 525-1, 525-2, 525-3 generated for the autonomous vehicles 550-1, 550-2, 550-3. In some embodiments, information or data may be received from operating autonomous vehicles 550-4, 550-5, 550-6, 550-7, 550-8 within the area or environment, regarding their past, present or future operations, e.g., their respective speeds, courses, positions (e.g., latitudes and longitudes), elevations or angles of orientation (e.g., yaws, pitches or rolls), as well as operational or environmental conditions such as surface conditions, traffic conditions, congestion or any other relevant factors encountered by the autonomous vehicles to the server 592 or other networked computer systems.”; Col. 4, line 55 – Col. 5, line 17: “The set of data 122 may thus include one or more layers or other forms of data defining a plurality of paths extending over or along the outdoor transportation infrastructure within the area or environment, and may have been identified or determined from any source, e.g., from one or more networked data stores, or the like, including but not limited to the same sources as the baseline map 105 shown in FIG. 1A. For example, some or all of the set of data 122 regarding the outdoor transportation infrastructure may be determined from information or data previously captured during travel within the area or environment, e.g., based at least in part on one or more time stamps, net speeds, courses, angles of orientation, levels of traffic congestion, sizes or dimensions of any payloads carried, operational or environmental conditions, or any other information or data, captured or otherwise determined by autonomous vehicles or by one or more persons or machines. Moreover, some or all of the set of data regarding the outdoor transportation infrastructure may be updated in real time or in near-real time, on a synchronous or asynchronous basis.”, Supplemental Note: like the customized maps, the (2d) baseline maps can also be updated from the 3d information captured by the autonomous vehicle)

Regarding claim 8, Baalke, as modified, teaches wherein the 3-dimensional map divides the manufacturing line into a plurality of regions. (Baalke: Col. 12, lines 48 – 63: “The storage area 233 may include one or more predefined two-dimensional or three-dimensional spaces for accommodating items and/or containers of such items, such as aisles, rows, bays, shelves, slots, bins, racks, tiers, bars, hooks, cubbies or other like storage means, or any other appropriate regions or stations. The distribution station 235 may include one or more regions or stations where items that have been retrieved from a designated storage area may be evaluated, prepared and packed for delivery from the fulfillment center 230 to locations or destinations specified by customers, e.g., by way of one or more of the autonomous vehicles 250-1, 250-2 . . . 250-n, or any other vehicle of any type, e.g., cars, trucks, trailers, freight cars, container ships or cargo aircraft (e.g., manned aircraft or unmanned aircraft, such as drones).”; Col. 29, lines 31 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may also include a set of infrared proximity sensors arranged along a perimeter. Such infrared proximity sensors may be configured to output signals corresponding to proximities of objects (e.g., pedestrians) within predetermined ranges of the autonomous vehicles 550-1, 550-2, 550-3. One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”, Supplemental Note: the autonomous robot is able to color code objects and regions in a 3d environment)

Regarding claim 11, Baalke, as modified, teaches wherein the plurality of vehicles each have a space in which a conveying-object is loaded and are further configured to transport the conveying-object. (Baalke: Col. 9, lines 34 – 47: “The autonomous vehicles of the present disclosure may include a cargo bay or other storage compartment, or multiple cargo bays or storage compartments, for storing items that are being delivered from an origin to a destination. Such cargo bays or storage compartments may be used to securely maintain items therein at any desired temperature, pressure or alignment or orientation, and to protect such items against the elements. Furthermore, in some embodiments, the autonomous vehicles may include various equipment or components for determining whether a cargo bay or other storage compartment is empty or includes one or more items, or for identifying specific items that are stored therein, along with equipment or components for engaging or interacting with such items.”)

Regarding claim 12, Baalke, as modified, teaches wherein the measurement module further includes a vibration sensor and a noise sensor. (Baalke: Col. 14, lines 65 – 38: “As is also shown in FIG. 2B, the autonomous vehicle 250-i also includes one or more control systems 260-i, as well as one or more sensors 262-i,”; Col. 16, lines 55 – 65: “The sensor 262-i may further be one or more compasses, speedometers, altimeters, thermometers, barometers, hygrometers, gyroscopes, air monitoring sensors (e.g., oxygen, ozone, hydrogen, carbon monoxide or carbon dioxide sensors), ozone monitors, pH sensors, magnetic anomaly detectors, metal detectors, radiation sensors (e.g., Geiger counters, neutron detectors, alpha detectors), accelerometers, ranging sensors (e.g., radar or LIDAR ranging sensors) or sound sensors (e.g., microphones, piezoelectric sensors, vibration sensors or other transducers for detecting and recording acoustic energy from one or more directions).”)

Regarding claim 13, Baalke teaches an autonomous driving device comprising: (Baalke: Abstract: “Customized navigation maps of an area are generated for autonomous vehicles based on a baseline map of the area, transportation systems within the area, and attributes of the autonomous vehicles.”)

… a measurement module mounted on each of the plurality of vehicles and configured to measure a shape and a position of the manufacturing equipment in real time; (Baalke: Col.
29, lines 10 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may include one or more omnidirectional or 360° LIDAR sensors or other range sensors arranged on the top of the autonomous vehicles 550-1, 550-2, 550-3 or in other locations or positions, such as at either end of the autonomous vehicles 550-1, 550-2, 550-3. For example, each of such sensors may generate one or more three-dimensional distance maps, e.g., in the form of a 3D point cloud representing distances at nominal ranges (e.g., between one meter and fifty meters) from the LIDAR sensor and an external surface within the field of view of the LIDAR sensor by rotating the LIDAR sensor (i.e., once per scan cycle). One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”) a map generating module configured to transmit information to and receive information from the measurement module; and (Baalke: Col. 3, lines 5 – 9: “The customized navigation maps that are generated for travel by autonomous vehicles within the area may thus be subject to constant revision as new or updated information or data regarding the area is identified.”; Col. 5, lines 19 – 23: “In accordance with embodiments of the present disclosure, a customized navigation map may be generated for an autonomous vehicle and used to select a route to be traveled by the autonomous vehicle during the performance of one or more tasks within an area or environment, such as a delivery of an item to a location within the area or environment.”) a control module configured to control driving of the plurality of vehicles, (Baalke: Col. 4, lines 1 – 4: “The server 192 may be associated with one or more systems for monitoring, maintaining, managing or otherwise controlling a fleet of one or more autonomous vehicles, e.g., a fleet management system for such vehicles.”) wherein the map generating module is further configured to generate a 3-dimensional map (Baalke: Col. 29, lines 10 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may include one or more omnidirectional or 360° LIDAR sensors or other range sensors arranged on the top of the autonomous vehicles 550-1, 550-2, 550-3 or in other locations or positions, such as at either end of the autonomous vehicles 550-1, 550-2, 550-3. For example, each of such sensors may generate one or more three-dimensional distance maps, e.g., in the form of a 3D point cloud representing distances at nominal ranges (e.g., between one meter and fifty meters) from the LIDAR sensor and an external surface within the field of view of the LIDAR sensor by rotating the LIDAR sensor (i.e., once per scan cycle). 
One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”; Col. 3, lines 5 – 9: “The customized navigation maps that are generated for travel by autonomous vehicles within the area may thus be subject to constant revision as new or updated information or data regarding the area is identified.”; Col. 5, lines 5 – 17: “For example, some or all of the set of data 122 regarding the outdoor transportation infrastructure may be determined from information or data previously captured during travel within the area or environment, e.g., based at least in part on one or more time stamps, net speeds, courses, angles of orientation, levels of traffic congestion, sizes or dimensions of any payloads carried, operational or environmental conditions, or any other information or data, captured or otherwise determined by autonomous vehicles or by one or more persons or machines. Moreover, some or all of the set of data regarding the outdoor transportation infrastructure may be updated in real time or in near-real time, on a synchronous or asynchronous basis.”; Col. 7, lines 31 – 37: “The server 192 may utilize any information or data received from the autonomous vehicle 150, or from any other autonomous vehicles or other systems or machines (not shown), to update one or more of the baseline map 105, the sets of data 120, 122, 124, or any customized navigation maps generated based thereon, in accordance with the present disclosure.”, Supplemental Note: the sensors of the autonomous vehicle are able to be used to create a 3D map of the environment. The 3D mapping of the autonomous vehicle can be used to update the baseline/customized navigation maps) … wherein the map generating module is further configured to reflect the information about obstacles into the 3-dimensional map, (Baalke: Col. 29, lines 10 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may include one or more omnidirectional or 360° LIDAR sensors or other range sensors arranged on the top of the autonomous vehicles 550-1, 550-2, 550-3 or in other locations or positions, such as at either end of the autonomous vehicles 550-1, 550-2, 550-3. For example, each of such sensors may generate one or more three-dimensional distance maps, e.g., in the form of a 3D point cloud representing distances at nominal ranges (e.g., between one meter and fifty meters) from the LIDAR sensor and an external surface within the field of view of the LIDAR sensor by rotating the LIDAR sensor (i.e., once per scan cycle). One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”; Col. 34, line 55 – Col. 
Baalke in view of Lee, however, still does not teach that 3-dimensionally represents the manufacturing line, and the measurement module is further configured to generate 3-dimensional modeling information including information about a structure of the manufacturing line and the shape and position of the manufacturing equipment, whereas Al-Yousef does. Al-Yousef teaches that 3-dimensionally represents the manufacturing line, and (Al-Yousef: Abstract: “The present disclosure describes a computer-implemented method to manage an industrial plant facility”; Col. 5, lines 46 – 55: “The real time visualization progression (RTVP) 121B can display real-time build out activities program. For example, the RTVP 121B may superimpose build out 3D image with the real-time progress feed. The RTVP 121B may provide dashboard and reporting capabilities on both construction progress and safety behavior metrics. The RTVP 121B may have the capability to detect schedule and geometric mismatch between the real-time captured 3D module and the 3D planned design.”) the measurement module is further configured to generate 3-dimensional modeling information including information about a structure of the manufacturing line and the shape and position of the manufacturing equipment, (Al-Yousef: Col. 5, line 61 – Col. 6, line 13: “The safety and quality monitoring system (SQMS) 122B can be embodied as a construction project quality monitoring system (CPQMS) and a construction project safety monitoring system (CPSMS). In case of the CPQMS, SQMS 122B can project construction progress visualization through the creation of 3D models from videos and imagery taken either by ground CCTV systems or aerial photogrammetry such as from drones. The data from the 3D models are correlated with asset construction progress from resource management. This can include 2D engineering tools, materials management, project controls, scheduling systems, and video and analytics systems. The SQMS 122B can provide 3D scanning to capture the construction status and verify it against the design basis in the 3D model to ensure that future construction and operation will proceed smoothly and identify any quality issues as early as possible. The SQMS 122B can provide true 3D model from a circular aperture or multiple single aperture, with high definition resolution (mm to km), provides active and passive 3D modeling, and allows identification and tagging for industrial equipment.”; Col. 10, lines 3 – 5: “If the determination is that the incident is in a hazardous area, the process may proceed to acknowledge the alarm and second a position tag to a safety coordinator (325A)”, Supplemental Note: the 3D model can identify and tag the industrial equipment; tagging can include a position tag, thus the industrial equipment can be located as well) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Baalke with the teachings of Al-Yousef with a reasonable expectation of success. Please refer to the rejection of claim 1, as both claims recite the same functional language and are therefore rejected on the same rationale.
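For illustration only: a minimal sketch of the kind of geometric-mismatch check between a captured 3D model and the 3D planned design that the Al-Yousef passage describes. The equipment tags, positions, and tolerance are hypothetical assumptions, not Al-Yousef's implementation.

```python
import math

# Hypothetical planned design and captured scan: equipment tag -> (x, y, z).
PLANNED = {"pump-01": (0.0, 0.0, 0.0), "press-02": (10.0, 4.0, 0.0)}
CAPTURED = {"pump-01": (0.1, -0.05, 0.0), "press-02": (10.9, 4.0, 0.0)}

def geometric_mismatches(planned, captured, tol_m=0.25):
    """Compare a captured 3D model against the planned design and flag
    equipment whose measured position deviates beyond tolerance, or which
    is missing from the scan entirely."""
    flags = {}
    for tag, p in planned.items():
        c = captured.get(tag)
        if c is None:
            flags[tag] = "missing from scan"
            continue
        dev = math.dist(p, c)
        if dev > tol_m:
            flags[tag] = f"out of position by {dev:.2f} m"
    return flags

print(geometric_mismatches(PLANNED, CAPTURED))  # {'press-02': 'out of position by 0.90 m'}
```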
Baalke in view of Al-Yousef, however, still does not teach wherein the measurement module is further configured to generate 3-dimensional modeling information based on a 2-dimensional map and transmit the 3-dimensional modeling information to the map generating module, whereas Daisuke does. Daisuke teaches wherein the measurement module is further configured to generate 3-dimensional modeling information based on a 2-dimensional map and transmit the 3-dimensional modeling information to the map generating module, (Daisuke: Paragraph 0007: “A first aspect of the present invention provides an information management apparatus comprising: receiving means for receiving images and photographing information in the equipment floor photographed by the robot from a self-propelled robot in the equipment floor; three-dimensional map data A storage unit that stores the image, a storage unit that is attached to a device installed on the equipment floor, detects a code indicating information specifying the device from the image, and acquires information identifying the device from the code Dimensional map data on the basis of the photographing information, information specifying the device indicated by the code and position information on the three-dimensional map data Mapping means for associating an alert indicating an abnormality of a device installed on the equipment floor, There characterized by having a display means for displaying the image of the photographed the equipment in the floor.”; Paragraph 0015: “The camera 11 photographs the surroundings of the self-propelled robot 1. The image photographed by the camera 11 and the photographing information such as the photographing direction are transmitted from the transmitter 14 to the server 3.”, Supplemental Note: the vehicle is able to capture images, interpreted as a 2-dimensional map, and gather information for three-dimensional map data) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Baalke with the teachings of Daisuke with a reasonable expectation of success. Please refer to the rejection of claim 1, as both claims recite the same functional language and are therefore rejected on the same rationale.
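For illustration only: a minimal sketch, under assumed names, of the association Daisuke's paragraph 0007 describes, in which a device code detected in a robot's photograph is tied to a position on three-dimensional map data using the photographing information. The simple range-and-bearing offset model is a simplification, not Daisuke's method.

```python
import math
from dataclasses import dataclass

@dataclass
class Photo:
    """A 2D image's metadata from the self-propelled robot (hypothetical names)."""
    robot_xyz: tuple      # robot position on the 3-dimensional map data
    direction_deg: float  # photographing direction
    detected_code: str    # code read from the equipment, e.g. a QR payload

def register_device(map3d, photo, range_m=1.5):
    """Associate the device identified by the detected code with a position
    on the 3-dimensional map data, offset from the robot along the
    photographing direction."""
    x, y, z = photo.robot_xyz
    rad = math.radians(photo.direction_deg)
    pos = (x + range_m * math.cos(rad), y + range_m * math.sin(rad), z)
    map3d[photo.detected_code] = {"position": pos, "alert": None}
    return map3d

m = register_device({}, Photo((2.0, 3.0, 0.0), 90.0, "device-47"))
print(m["device-47"]["position"])  # approximately (2.0, 4.5, 0.0)
```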
Regarding claim 14, Baalke, as modified, teaches wherein the measurement module includes an optical measurement device. (Baalke: Col. 14, lines 65 – 38: “As is also shown in FIG. 2B, the autonomous vehicle 250-i also includes one or more control systems 260-i, as well as one or more sensors 262-i,”; Col. 15, lines 42 – 53: “The sensor 262-i may also be an imaging device including any form of optical recording sensor or device (e.g., digital cameras, depth sensors or range cameras, infrared cameras, radiographic cameras or other optical sensors) that may be configured to photograph or otherwise capture imaging data (e.g., still or moving images in color or black and white that may be captured at any frame rates, or depth imaging data such as ranges), or associated audio information or data, or metadata, regarding objects or activities occurring within a vicinity of the autonomous vehicle 250-i, or for any other purpose.”) Regarding claim 15, Baalke, as modified, teaches wherein the map generating module is further configured to store the position of the manufacturing equipment, which is measured by the measurement module, in the 3-dimensional map. (Baalke: Col. 7, lines 26 – 37: “As is further shown in FIG. 1J, the autonomous vehicle 150 may report information or data regarding the presence and locations of the access point 141-3 or the obstacle 148-1, or any other operational conditions or statuses affecting the route 135-3, to the server 192 over the network 180. The server 192 may utilize any information or data received from the autonomous vehicle 150, or from any other autonomous vehicles or other systems or machines (not shown), to update one or more of the baseline map 105, the sets of data 120, 122, 124, or any customized navigation maps generated based thereon, in accordance with the present disclosure.”; Col. 29, lines 10 – 46, reproduced above, Supplemental Note: the customized maps can be updated with the data received from the autonomous vehicle, and the 3D sensors of the autonomous vehicle are able to identify obstacles)
Regarding claim 16, Baalke, as modified, teaches further comprising an obstacle sensing module configured to sense an obstacle. (Baalke: Col. 29, lines 10 – 46, reproduced above) Regarding claim 17, Baalke, as modified, teaches wherein, when the obstacle sensing module senses the obstacle, the map generating module is further configured to add information of the obstacle to the 3-dimensional map. (Baalke: Col. 7, lines 26 – 37, and Col. 29, lines 10 – 46, both reproduced above, Supplemental Note: the customized maps can be updated with the data received from the autonomous vehicle, and the 3D sensors of the autonomous vehicle are able to identify obstacles) Regarding claim 18, Baalke, as modified, teaches wherein the map generating module is further configured to, when the map generating module adds the information of the obstacle to the 3-dimensional map, (Baalke: Col. 7, lines 26 – 37, and Col. 29, lines 10 – 46, both reproduced above, Supplemental Note: the customized maps can be updated with the data received from the autonomous vehicle, and the 3D sensors of the autonomous vehicle are able to identify obstacles) transmit the 3-dimensional map to the plurality of vehicles. (Baalke: Col. 3, lines 5 – 9, and Col. 5, lines 19 – 23, both reproduced above)
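For illustration only: a toy obstacle-sensing, map-update, and broadcast loop of the kind claims 16 – 18 recite. Every class and method name here is a hypothetical stand-in, not an API from Baalke or from the application.

```python
class Vehicle:
    def __init__(self, name):
        self.name, self.local_map = name, {}

    def receive_map(self, map3d):
        self.local_map = dict(map3d)  # replace local copy with the shared map

class MapGeneratingModule:
    def __init__(self, vehicles):
        self.map3d = {}           # (x, y) cell -> obstacle record
        self.vehicles = vehicles  # fleet that receives map updates

    def on_obstacle_sensed(self, cell, height_m):
        # Claim 17: add the sensed obstacle into the 3-dimensional map.
        self.map3d[cell] = {"height": height_m}
        # Claim 18: transmit the updated map to the plurality of vehicles.
        for v in self.vehicles:
            v.receive_map(self.map3d)

fleet = [Vehicle("veh1"), Vehicle("veh2")]
mgm = MapGeneratingModule(fleet)
mgm.on_obstacle_sensed((4, 7), height_m=0.3)  # obstacle sensing module event
print(fleet[1].local_map)  # {(4, 7): {'height': 0.3}}
```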
Regarding claim 19, Baalke, as modified, teaches wherein the control module is further configured to, when the obstacle sensing module senses the obstacle, control driving of the plurality of vehicles. (Baalke: Col. 10, lines 2 – 9: “Information or data obtained or determined by such sensors or such communications equipment may be utilized in manually or automatically controlling an autonomous vehicle, e.g., in causing the autonomous vehicle to travel along one or more paths or routes, to search for alternate paths or routes, or to avoid expected or unexpected hazards encountered by the autonomous vehicle while traveling along such paths or routes.”) Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Baalke et al. (US 11232391 B1), Al-Yousef et al. (US 11341830 B2), Lee et al. (US 20220126699 A1) and Daisuke et al. (JP6412974B1) as applied to claim 1 above, and further in view of Liaqat et al. (US 20220187428 A1) and Hirohumi et al. (WO2011125233A1). Regarding claim 9, Baalke, as modified, teaches wherein the plurality of regions include a first region and a second region … and the second region is a region in which there is a difference between the 2-dimensional map and the 3-dimensional modeling information. (Baalke: Col. 7, lines 26 – 37, reproduced above; Col. 29, lines 31 – 46: “In some embodiments, the autonomous vehicles 550-1, 550-2, 550-3 may also include a set of infrared proximity sensors arranged along a perimeter. Such infrared proximity sensors may be configured to output signals corresponding to proximities of objects (e.g., pedestrians) within predetermined ranges of the autonomous vehicles 550-1, 550-2, 550-3. One or more controllers within the autonomous vehicles 550-1, 550-2, 550-3 may fuse data streams from any of such sensors, e.g., LIDAR sensor(s), color camera(s), and/or proximity sensor(s), into a single real-time 3D color map of surfaces of objects (e.g., roads, sidewalks, road vehicles, pedestrians, or the like) around the autonomous vehicles 550-1, 550-2, 550-3 per scan cycle, and process the 3D color map into a crossing confidence score or other navigational decision during operations.”, Supplemental Note: based on the data from the autonomous vehicle, missing or new data can be used to update the customized navigation and baseline maps) In sum, Baalke teaches wherein the plurality of regions comprise a first region and a second region, and the second region is a region in which there is a difference between the 2-dimensional map and the 3-dimensional modeling information.
Baalke, however, does not teach the first region is a region in which the manufacturing equipment abnormally operating is located, whereas Liaqat does. Liaqat teaches the first region is a region in which the manufacturing equipment is determined to be abnormally operating (Liaqat: Abstract: “Disclosed is an autonomous inspection system comprising at least one autonomous mobile robot. The robot has a plurality of two-dimensional LiDAR scanners each scanner having a two-dimensional scanning plane. The plurality of scanners are mounted on the autonomous mobile robot with scanning plane orientations which are non-coplanar.”; Paragraph 0008: “A further aspect of the invention provides a machine readable storage medium comprising instructions executable by a processor to: receive two-dimensional point data from a plurality of LiDAR scans, stitch the two-dimensional point data to form a three-dimensional plot; combine the LiDAR scan data with stored mapping data; and identify features of objects within the three-dimensional plot and compare the dimensions and/or profile of the identified features to flag defects or non-conformities in the scanned objects.”; Paragraph 0013: “Scanning using fixed two-dimensional LiDAR scanners on an autonomous mobile robot platform may also enable significant time savings in comparison to existing or manual or automated measurements system. For example, it may be noted that two-dimensional LiDAR scanners may provide accurate position data at ranges up to the region of 20-25 m from the scanner. Thus, it will be appreciated that embodiments may scan large items quickly whilst also providing high quality data. This is particularly useful in Aircraft manufacturing or MRO facilities due to the scale of aircraft and airframe structures. As such, embodiments of the invention may particularly be configured for use in aerospace inspection. For example, for MRO facilities and the like, embodiments of the invention may be used for surface defect identification and inspection. In aircraft manufacturing applications, embodiments may for example be used to confirm shape conformance of aircraft structures.”, Supplemental Note: the autonomous robot is able to scan the environment and create maps that flag defects in the manufacturing plant, interpreted as manufacturing equipment operating abnormally) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Baalke with the teachings of Liaqat with a reasonable expectation of success. Both Baalke and Liaqat teach an autonomous robot able to identify its surroundings. Liaqat further teaches that its autonomous robot is able to flag defects in manufacturing equipment, whereas Baalke teaches the ability to map the environment as a 3D point cloud and a 3D color map. Baalke, however, does not gather information about the equipment or mark whether a piece of equipment is defective. One of ordinary skill in the art would find applying the flagging technique of Liaqat to the autonomous robot taught by Baalke to be a use of a known technique to improve similar devices in the same way. For example, the autonomous robot could flag regions of the environment that contain defective equipment and map them with a corresponding color in the 3D color map; wherever that color is used, the system of Baalke is able to identify the defective equipment.
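For illustration only: a minimal sketch of stitching two-dimensional LiDAR point data from non-coplanar scan planes into a three-dimensional plot, as the Liaqat abstract and paragraph 0008 describe. The scan geometry, mounting heights, and names are hypothetical assumptions.

```python
import math

def stitch_scans(scans):
    """Stitch 2D LiDAR scans into one 3D point set.
    scans: list of (mount_height_m, [(range_m, bearing_deg), ...])."""
    points = []
    for height, returns in scans:
        for rng, bearing in returns:
            rad = math.radians(bearing)
            # Each 2D return becomes an (x, y, z) point at the scanner height.
            points.append((rng * math.cos(rad), rng * math.sin(rad), height))
    return points

plot3d = stitch_scans([
    (0.2, [(5.0, 0.0), (5.1, 10.0)]),  # low scan plane
    (1.4, [(5.0, 0.0)]),               # high scan plane
])
print(len(plot3d), plot3d[0])  # 3 points; the first is (5.0, 0.0, 0.2)
```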
Baalke in view of Liaqat, however, still does not teach that the abnormality determination is based on vibration and noise measured by the measurement module, whereas Hirohumi does. Hirohumi teaches based on vibration and noise measured by the measurement module, (Hirohumi: Lines 59 – 65: “The above object can be achieved by an apparatus abnormality detection device according to the present invention. In summary, according to the present invention, there is provided an abnormality detecting device for detecting an abnormal sound of a device and detecting an abnormality of the device, comprising: a vibration detecting portion for detecting a vibration of the device in contact with a portion to be measured of the device; A sound propagation unit that propagates vibrations detected by the vibration detection unit, a space portion that converts the vibration propagated by the vibration propagation unit into sound, a sound collection unit that converts sound generated in the space portion into an electric signal,”; Lines 77 – 82: “According to the device abnormality detection device of the present invention, (1) it is possible to clearly listen to raw abnormal sound generated by the device at a huge facility such as a refinery or a petrochemical-related factory, It can be accurately detected. (2) It is also possible to clearly record raw abnormal sounds generated by the equipment, and hear it by others. (3) It is possible to clearly distinguish between the internal abnormal sound generated by the device and the external sound around the device, and accurately detect the abnormality of the device. (4) It is simple in structure, compact, lightweight, inexpensive, and easy to operate.”) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Baalke with the teachings of Hirohumi with a reasonable expectation of success. One of ordinary skill in the art would find it obvious to try combining the abnormality detection device with the self-driving robot of Baalke, as it improves the robot's capability of detecting abnormal sounds, which can lead to detecting an abnormality of a device. For example, the robot taught by Baalke, with the detection device taught by Hirohumi attached to it, would be able to more accurately determine abnormalities of any equipment it travels to. This mitigates the need for another device or person to manually check the device for an abnormality, as the robot can do so instead, thus increasing the robot's capabilities.
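For illustration only: a toy vibration/noise abnormality check in the spirit of the Hirohumi passage. The signal, baseline, and threshold factor are hypothetical; actual abnormal-sound detection is considerably more involved.

```python
import math

def rms(samples):
    """Root-mean-square level of a vibration or sound signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_abnormal(samples, baseline_rms, factor=3.0):
    """Flag the equipment when the measured level exceeds the healthy
    baseline by a chosen factor."""
    return rms(samples) > factor * baseline_rms

# Synthetic signals standing in for measured vibration data.
healthy = [0.01 * math.sin(0.3 * i) for i in range(1000)]
faulty = [0.08 * math.sin(0.3 * i) for i in range(1000)]
base = rms(healthy)
print(is_abnormal(healthy, base), is_abnormal(faulty, base))  # False True
```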
Regarding claim 10, Baalke, as modified, teaches wherein, on the display, the first region is shown in a first color, and the second region is shown in a second color that is different from the first color. (Baalke: Col. 29, lines 31 – 46, reproduced above; Col. 9, lines 47 – 53: “The autonomous vehicles may also include one or more display screens (e.g., touchscreen displays, scanners, keypads) having one or more user interfaces for displaying information regarding such vehicles or their contents to humans, or for receiving interactions (e.g., instructions) from such humans, or other input/output devices for such purposes.”, Supplemental Note: the 3D map can be in color, with colors coded to different regions/objects around the autonomous vehicle) In sum, Baalke teaches wherein, on the display, the first region is shown in a first color, and the second region is shown in a second color that is different from the first color. Baalke, however, does not teach a display displaying the 3-dimensional map, whereas Al-Yousef does. Al-Yousef teaches a display displaying the 3-dimensional map (Al-Yousef: Col. 5, lines 47 – 55, reproduced above at Col. 5, lines 46 – 55) Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Baalke with the teachings of Al-Yousef with a reasonable expectation of success. Baalke and Al-Yousef both teach the ability to create a 3D map representing the environment, where Al-Yousef teaches the ability to display the 3D environment and the autonomous robot of Baalke includes a display. The ability to view the 3D environment on the display itself would be obvious to try for one with knowledge in the art. This improves the functionality of the autonomous robot, as a person can access the 3D map through the robot's display, which is not possible in Baalke alone. For example, in an industrial plant environment, a manager may want to inspect the facility, which they can now do directly on the autonomous robot, thus improving the efficiency of the autonomous robot.
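For illustration only: a minimal sketch of the color assignment claim 10 recites, with the first region (abnormally operating equipment) shown in a first color and the second region (a difference between the 2-dimensional map and the 3-dimensional modeling information) shown in a second, different color. The color values and region model are hypothetical.

```python
# Hypothetical display colors for the claimed region types.
FIRST_COLOR = (255, 0, 0)     # first region: abnormal equipment
SECOND_COLOR = (255, 200, 0)  # second region: 2D/3D mismatch
DEFAULT = (128, 128, 128)     # everything else

def region_color(region):
    """Map a region record to the color used on the display."""
    if region.get("abnormal_equipment"):
        return FIRST_COLOR
    if region.get("mismatch_2d_3d"):
        return SECOND_COLOR
    return DEFAULT

regions = [
    {"id": "bay-1", "abnormal_equipment": True},
    {"id": "bay-2", "mismatch_2d_3d": True},
    {"id": "bay-3"},
]
for r in regions:
    print(r["id"], region_color(r))
```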
Response to Arguments Applicant's arguments, see section Rejections under 35 USC 112(b) of the REMARKS, filed 12/17/2025, with respect to the 35 USC 112(b) indefiniteness rejection of claims 3 – 10 have been fully considered but are moot. Applicant has amended the claim limitation of “wherein the measurement module is further configured to generate 3-dimensional modeling information based on a 2-dimensional map and transmit the 3-dimensional modeling information to the map generating module, the 2-dimensional map representing the manufacturing line as a 2-dimensional grid structure, and the 3-dimensional modeling information comprises includes information about the structure of the manufacturing line and a shape and the position of the manufacturing equipment” to claim 1; however, the claim limitation is still indefinite. Furthermore, the Applicant does not provide a proper response with regard to the indefiniteness of the claim limitation. The 35 USC 112(b) indefiniteness rejection still stands within the context of claim 1 and is stated above in section Claim Rejections - 35 USC § 112.
Applicant's arguments, see section Rejections under 35 USC 103 – Baalke in view of Al-Yousef and Seung of the REMARKS, filed 12/17/2025, with respect to the 35 USC 103 prior art rejection of claims 1 – 8, 11, 12 and 20 have been considered but are not fully persuasive. Applicant states, regarding the amendments to claim 1, that Baalke in view of Al-Yousef and Seung does not teach an autonomous driving device where, while each vehicle is traveling, a measurement module measures (i) a structure of a manufacturing line, (ii) a position of the manufacturing equipment on the floor, (iii) a sheathing condition and grounding condition of a cable mounted on a traveling rail, and (iv) obstacles located on the traveling rail, with the map generating module reflecting the rail-obstacle information in a shared 3D map. However, the Applicant does not provide a proper response to the prior art rejection by discussing the references applied against the claims and explaining how the claims avoid or are distinguished from the references, and the argument is therefore moot. Please see section Claim Rejections - 35 USC § 103 for the citations of the prior art used to reject the amended claim limitations. Applicant further states that Baalke, Al-Yousef and Daisuke are directed towards floor-traveling robots that do not disclose rail structures, whereas Seung does but is silent as to a cable sheathing/grounding diagnosis and rail-obstacle mapping into a global 3D map. Applicant states the Examiner relies on impermissible hindsight reconstruction to arrive at this combination of claimed features. Examiner agrees with the Applicant's assertion that Seung does not teach a sheathing/grounding diagnosis and rail-obstacle mapping into a 3D map.
Through further search and consideration, however, the additional prior art of Lee (US 20220126699 A1) is now applied to teach the claim limitation of a traveling rail system arranged on the ceiling of a facility, which can be utilized by transport vehicles for traveling and for locating other vehicles on the traveling rail. Examiner respectfully disagrees that the claim amendments are to be interpreted as teaching a sheathing/grounding diagnosis. The claim states “wherein the measurement module is configured to measure… a sheathing condition and a grounding condition of a cable mounted on the traveling rail”; however, it is indefinite how the measurement module is configured to measure a sheathing condition and a grounding condition. The measurement module as stated within the claim has a LiDAR sensor and a 3-dimensional sensor, thus it is indefinite what data is being captured to measure a sheathing condition and a grounding condition. A sheathing condition and a grounding condition of a cable are not known terms in the art, and the claims do not provide enough detail on what these conditions would encompass and what data is being collected by the LiDAR and 3-dimensional sensor to evaluate these conditions. For example, the measurement module is further used to detect a position of a manufacturing equipment on the floor, which one with knowledge in the art would recognize can be obtained by the use of these claimed sensors. Please see section Claim Rejections - 35 USC § 112 for the indefiniteness rejection of this claim. Due to this indefiniteness, the sheathing condition and grounding condition of a cable mounted on the traveling rail are interpreted as a vehicle on the traveling rail having a cable attached to it, as it is unclear what measuring a sheathing and grounding condition means within the context of the claim. The prior art of Lee teaches this interpretation, as it states that the traveling device has a power supply cable provided along the traveling rail. The power supply of Lee is also a simple substitution for the power module of Baalke's robot, as both are used to power their respective vehicle/robot. Furthermore, Lee teaches the ability to track vehicle positions, interpreted as obstacles; thus, other vehicles on the track are identified when determining a shortest path to a destination. The ability to create a 3D map is taught by the primary art of Baalke. Please refer to section Claim Rejections - 35 USC § 103 for the prior art citations used to reject the amended claim limitations and the motivation to combine.
Applicant's arguments, see section Rejections under 35 U.S.C. §103 - Baalke, Al-Yousef, and Seung, and further in view of various combinations of Daisuke, Liaqat and Hirohumi of the REMARKS, filed 12/17/2025, with respect to the 35 USC 103 prior art rejection of claims 3 – 10 are moot. Applicant must discuss the references applied against the claims, explaining how the claims avoid the references or distinguish from them. Applicant's arguments, see section Rejections under 35 U.S.C. §103 – Baalke in view of Al-Yousef of the REMARKS, filed 12/17/2025, with respect to the 35 USC 103 prior art rejection of claims 13 – 17 are moot. Applicant states claim 13 recites the same limitations as claim 1, and thus that claim and its dependent claims 14 – 17 are patentable. Examiner respectfully disagrees and states claim 13 is rejected for the same reasons as claim 1 as stated above.
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVAM SHARMA whose telephone number is (703) 756-1726. The examiner can normally be reached Monday – Friday, 8:00 – 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin Bishop, can be reached at 571-270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SHIVAM SHARMA/ Examiner, Art Unit 3665 /Erin D Bishop/ Supervisory Patent Examiner, Art Unit 3665

Prosecution Timeline

Sep 22, 2023
Application Filed
Mar 20, 2025
Non-Final Rejection — §103, §112
Jun 25, 2025
Response Filed
Oct 01, 2025
Final Rejection — §103, §112
Dec 17, 2025
Request for Continued Examination
Jan 20, 2026
Response after Non-Final Action
Jan 29, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12491869
METHOD FOR CONTROLLING VEHICLE, VEHICLE AND ELECTRONIC DEVICE
2y 5m to grant Granted Dec 09, 2025
Patent 12485897
METHOD FOR DETERMINING PASSAGE OF AUTONOMOUS VEHICLE AND RELATED DEVICE
2y 5m to grant Granted Dec 02, 2025
Patent 12434722
METHODS AND SYSTEMS FOR LATERAL CONTROL OF A VEHICLE
2y 5m to grant Granted Oct 07, 2025
Patent 12427919
VEHICLE BLIND-SPOT REDUCTION DEVICE
2y 5m to grant Granted Sep 30, 2025
Patent 12406535
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
43%
With Interview (-1.3%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
