DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Claims 13-16 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 11/24/2025.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
[101 Analysis Step 1]
Step 1 of the 2019 Guidance first looks to whether the claimed invention is directed to a statutory category, namely a process, machine, manufacture, or composition of matter.
Claim 1 is directed to a method of operating an unmanned mobile vehicle (i.e., a process) and claim 7 is directed to an apparatus of an unmanned mobile vehicle (i.e., a machine). Thus, claims 1 and 7 fall within one of the four statutory categories (Step 1: YES).
[101 Analysis Step 2A, Prong I]
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites:
A method of operating an unmanned mobile vehicle for detecting an indoor environment, comprising:
obtaining first motion information using a LiDAR sensor provided on the unmanned mobile vehicle;
obtaining second motion information using an inertial sensor provided on the unmanned mobile vehicle;
performing correction on the first motion information and the second motion information on the basis of error models corresponding to the LiDAR sensor and the inertial sensor; and
determining final position information of the unmanned mobile vehicle on the basis of the correction.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, “performing…” and “determining…” in the context of the claim encompass a person looking at the data collected and using it to formulate a judgment. Accordingly, the claim recites at least one abstract idea.
[101 Analysis Step 2A, Prong II]
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A method of operating an unmanned mobile vehicle for detecting an indoor environment, comprising:
obtaining first motion information using a LiDAR sensor provided on the unmanned mobile vehicle;
obtaining second motion information using an inertial sensor provided on the unmanned mobile vehicle;
performing correction on the first motion information and the second motion information on the basis of error models corresponding to the LiDAR sensor and the inertial sensor; and
determining final position information of the unmanned mobile vehicle on the basis of the correction.
For the following reason(s), the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of “obtaining first motion information using a LiDAR sensor provided on the unmanned mobile vehicle” and “obtaining second motion information using an inertial sensor provided on the unmanned mobile vehicle,” the examiner submits that these limitations are insignificant extra-solution activities that merely use sensors to perform the process. In particular, the obtaining steps, performed via sensors, are recited at a high level of generality (i.e., as a general means of gathering data for use in the performing step) and amount to mere data gathering, which is a form of insignificant extra-solution activity.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
[101 Analysis Step 2B]
Regarding Step 2B of the Revised Guidance, representative independent claims 1 and 7 do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claims do not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. And as discussed above, the additional limitations of “obtaining first motion information using a LiDAR sensor provided on the unmanned mobile vehicle” and “obtaining second motion information using an inertial sensor provided on the unmanned mobile vehicle” are insignificant extra-solution activities. Hence, the claims are not patent eligible.
Dependent claims 2-6 and 8-12 do not recite any further limitations that cause the claims to be directed towards statutory subject matter. The claims merely recite further aspects of the abstract idea. Each of the further limitations expounds upon the abstract idea and does not recite additional elements integrating the abstract idea into a practical application, or additional elements that are not well-understood, routine, or conventional. Therefore, dependent claims 2-6 and 8-12 are similarly rejected as being directed towards non-statutory subject matter.
Therefore, claims 1-12 are ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-3 and 7-9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pub No. US 2022/0276053 A1 to Scherzinger (Scherzinger).
In Reference to Claim 1
A method of operating an unmanned mobile vehicle for detecting an indoor environment, comprising:
obtaining first motion information using a LiDAR sensor (270) provided on the unmanned mobile vehicle (Scherzinger teaches at least in Fig.2A and paragraphs [0038] and [0096] “A range image can be generated by a range image (RI) sensor at a given acquisition time or over a given time window. Examples of RI sensors include a 3D scanning LiDAR, a 3D imaging radar, or a stereo camera array with range image generation capability. The RI sensor can be free-running or triggered. A free running RI sensor can have a RI data record output frequency set by an internal clock or the RI sensor mechanization. A triggered RI sensor can generate a RI data record when triggered by a command message or signal” and “The RI sensor(s) 270 can generate a RI data record per data capture epoch. A triggered RI sensor that generates an RI data record at a specified measurement construction time can be assumed without loss of generality to simplify RI data time alignment with measurement construction times”);
obtaining second motion information using an inertial sensor (240) provided on the unmanned mobile vehicle (Scherzinger teaches at least in Fig.2A and paragraphs [0004] and [0104] “The motion sensors and rotation sensors may be referred to as an inertial measure unit (IMU). The IMU can include a three-axis accelerometer and a three-axis gyroscope attached to the object, for measuring its specific forces (linear accelerations plus gravitational force and Coriolis force) along three orthogonal axes and its angular rates around three orthogonal rotational axes” and “can construct measurements from data obtained from the IMU 240”);
performing correction on the first motion information and the second motion information on the basis of error models corresponding to the LiDAR sensor (270) and the inertial sensor (240) (Scherzinger teaches at least in Fig.2A and paragraphs [0081] and [0101] “The following inertial sensor errors can be modeled in an AINS estimator using scalar stochastic process models such as a Gauss-Markov or random walk model: accelerometer biases, accelerometer scale errors, accelerometer triad orthogonality errors; gyro biases, gyro scale errors, gyro triad orthogonality errors, and the like” and “The AINS estimator 220 can perform, for example, state initializations, time updates and measurement updates on receipt of the RFM data, and the like. The state vector in the AINS estimator 220 can include the following elements related to RI-AINS: (i) RI sensor errors; (ii) RI sensor installation parameter errors; and (iii) map position and orientation errors”); and
determining final position information of the unmanned mobile vehicle on the basis of the correction (Scherzinger teaches at least in Fig.2A and paragraph [0004] “The output manager 250 is similar to the output manager 150 in FIG. 1 as described above. It can combine the INS solution provided by the INS 230 and the INS solution statistics computed by the AINS estimator 220, and output an absolute pose estimate, including position and orientation”).
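For illustration only, the correct-then-determine flow mapped above can be sketched as a weighted fusion of two bias-corrected motion estimates. This is a hypothetical Python sketch by the editor; the bias/variance "error models," function names, and numeric values are assumptions, not taken from the claims or from Scherzinger's AINS estimator (which uses a full stochastic state estimator rather than this simplified scheme).

```python
import numpy as np

# Hypothetical per-sensor error models: each gives a fixed bias (metres)
# and a variance (m^2). Real error models (e.g., Gauss-Markov processes,
# as in Scherzinger's AINS estimator) are time-varying and richer.
LIDAR_MODEL = {"bias": np.array([0.02, -0.01]), "var": 0.04}
IMU_MODEL   = {"bias": np.array([-0.05, 0.03]), "var": 0.25}

def correct(measurement, model):
    """Remove the modeled bias from a raw motion estimate."""
    return measurement - model["bias"]

def fuse(lidar_xy, imu_xy):
    """Inverse-variance weighted fusion of the two corrected estimates."""
    c_l = correct(lidar_xy, LIDAR_MODEL)
    c_i = correct(imu_xy, IMU_MODEL)
    w_l = 1.0 / LIDAR_MODEL["var"]   # the lower-variance sensor
    w_i = 1.0 / IMU_MODEL["var"]     # dominates the final estimate
    return (w_l * c_l + w_i * c_i) / (w_l + w_i)

final_xy = fuse(np.array([1.02, 2.01]), np.array([0.95, 2.13]))
```

Here the "final position information" is simply the fused estimate; the sketch shows why correction must precede fusion: uncorrected biases would otherwise be averaged into the result.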
In Reference to Claim 2
The method of claim 1 (see rejection to claim 1 above), wherein the obtaining of the first motion information using the LiDAR sensor (270) further includes:
obtaining first point information on a surrounding environment; in response to the obtaining of the first point information, obtaining second point information on the surrounding environment after a first cycle; and determining motion information corresponding to a minimum error between the first point information and the second point information as the first motion information on the basis of an iterative closest point (ICP) algorithm (Scherzinger teaches at least in Fig.2A and paragraph [0029] “One method of range image registration is the iterative closest point (ICP) method. (See, e.g., Chen and Medioni, Object modeling by registration of multiple range images, Proceedings of the 1991 IEEE International Conference on Robotics and Automation, pp. 2724-2729.) The ICP method can estimate a six degree of freedom (6 DOF) transformation, which includes a 3D translation and a 3D rotation, that brings one range image into alignment with another range image. The ICP method can minimize a registration error cost that is the sum of the distances squared between transformed points in one range image and their nearest neighbors in the other range image. The registration error cost can be minimized by numerical optimization if the optimal rotation is large or by using a closed form solution if the rotation component of the optimal transformation is small. The closed form solution can include a least squares adjustment that results from setting the registration error cost gradient with respect to the transformation components to zero”).
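For illustration only, the ICP mechanism described in the quoted passage (nearest-neighbor matching followed by a closed-form least-squares update) can be sketched in a translation-only 2D form. This is the editor's hypothetical sketch; Scherzinger's cited ICP estimates a full 6-DOF transformation, and the function name and test geometry here are assumptions.

```python
import numpy as np

def icp_translation(first_pts, second_pts, iters=20):
    """Translation-only ICP: estimate the 2D shift aligning the second
    scan to the first. Each iteration matches every shifted point to its
    nearest neighbor in the first scan, then applies the closed-form
    least-squares update (the mean residual), which is exactly the
    zero-gradient solution of the summed squared-distance cost."""
    t = np.zeros(2)
    for _ in range(iters):
        moved = second_pts + t
        # pairwise distances: rows = moved points, cols = first-scan points
        d = np.linalg.norm(moved[:, None, :] - first_pts[None, :, :], axis=2)
        nn = first_pts[np.argmin(d, axis=1)]   # nearest neighbors
        t += (nn - moved).mean(axis=0)         # least-squares update
    return t
```

The motion information "corresponding to a minimum error between the first point information and the second point information" is then the converged transform `t`.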
In Reference to Claim 3
The method of claim 2 (see rejection to claim 2 above), wherein the obtaining of the second motion information using the inertial sensor (240) further includes:
obtaining one or more pieces of velocity information corresponding to a movement of the unmanned mobile vehicle every second cycle; identifying velocity information corresponding to the first cycle from among the one or more pieces of velocity information; and generating the second motion information on the basis of the velocity information corresponding to the first cycle (Scherzinger teaches at least in Figs.1 and 2A and paragraphs [0004] and [0076] “An inertial navigation system (INS) is a navigation device that uses motion sensors and rotation sensors to continuously calculate the pose and the velocity of a moving object. The term “pose” may refer to the position and orientation of the object. The term “velocity” may refer to the speed and direction of movement of the object. The motion sensors and rotation sensors may be referred to as an inertial measure unit (IMU). The IMU can include a three-axis accelerometer and a three-axis gyroscope attached to the object, for measuring its specific forces (linear accelerations plus gravitational force and Coriolis force) along three orthogonal axes and its angular rates around three orthogonal rotational axes, respectively. The INS can calculate a solution of the current position and the current orientation of the object by using a previous solution and advancing the previous solution based on integrating estimated velocities (including linear velocity and angular velocity) over elapsed time. This process can be referred to as dead reckoning” and “FIG. 1A shows an exemplary block diagram of an aided INS (AINS). The inertial measure unit (IMU) 140 can measure the platform specific force (gravitational force plus Coriolis force and platform accelerations) vector and angular rate vector, both about its orthogonal input axes. 
For example, the IMU 140 can include three accelerometers for measuring linear specific force in three orthogonal directions, respectively, and three gyroscopes for measuring angular rates around three orthogonal axes, respectively. The three accelerometers can be referred to as a three-axis accelerometer. The three gyroscopes can be referred to as a three-axis gyroscope. The IMU 140 can output data records at a relatively high sampling rate, for example at 50 to 1000 samples per second. The IMU 140 can output time-sampled records containing: (i) specific force vector and angular rate vector, or (ii) incremental velocity vector and incremental angle vector. The incremental velocity vector is the specific force vector integrated over the sampling interval. The incremental angle vector is the angular rate vector integrated over the sampling interval. In some embodiments, the incremental velocity vector and the incremental angle vector may be preferred in an INS, because the INS solution may not suffer from accumulated quantization noise”).
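For illustration only, the cycle relationship in claim 3 (high-rate inertial velocity samples every "second cycle," integrated to match the LiDAR's "first cycle") can be sketched as simple dead reckoning. This is the editor's hypothetical sketch; the function name, rates, and constant-velocity example are assumptions, not from the claims or Scherzinger.

```python
# IMU velocity samples arrive every dt_imu seconds (the "second cycle");
# the LiDAR completes a scan every dt_lidar seconds (the "first cycle").
def displacement_over_lidar_cycle(velocities, dt_imu, dt_lidar):
    """velocities: list of (vx, vy) tuples sampled every dt_imu seconds.
    Integrates the samples falling within one LiDAR cycle into a
    dead-reckoned displacement, i.e., the inertial motion information
    aligned to the LiDAR cycle."""
    n = round(dt_lidar / dt_imu)          # IMU samples per LiDAR cycle
    dx = sum(v[0] for v in velocities[:n]) * dt_imu
    dy = sum(v[1] for v in velocities[:n]) * dt_imu
    return dx, dy
```

For example, with a 100 Hz IMU and a 10 Hz LiDAR, ten velocity samples are integrated per scan, giving the second motion information over the same interval that the ICP step covers.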
In Reference to Claim 7
An apparatus of an unmanned mobile vehicle for detecting an indoor environment, comprising:
a transmission and reception unit (a unit that handles the reception and transmission of data from the RI sensor (270, 310) and the IMU (240) to the computer) (see Scherzinger in Figs.2A-3 and paragraphs 95-96 and 103); and
at least one control unit (Scherzinger teaches at least in Figs.2A, 3 and paragraph [0125] “a computer or embedded processor can implement as part of a RI-AINS”) operably connected to the transmission and reception unit,
wherein the at least one control unit (computer or processor) is configured to obtain first motion information using a LiDAR sensor (270) provided on the unmanned mobile vehicle (Scherzinger teaches at least in Fig.2A and paragraphs [0038] and [0096] “A range image can be generated by a range image (RI) sensor at a given acquisition time or over a given time window. Examples of RI sensors include a 3D scanning LiDAR, a 3D imaging radar, or a stereo camera array with range image generation capability. The RI sensor can be free-running or triggered. A free running RI sensor can have a RI data record output frequency set by an internal clock or the RI sensor mechanization. A triggered RI sensor can generate a RI data record when triggered by a command message or signal” and “The RI sensor(s) 270 can generate a RI data record per data capture epoch. A triggered RI sensor that generates an RI data record at a specified measurement construction time can be assumed without loss of generality to simplify RI data time alignment with measurement construction times”);
obtain second motion information using an inertial sensor (240) provided on the unmanned mobile vehicle (Scherzinger teaches at least in Fig.2A and paragraphs [0004] and [0104] “The motion sensors and rotation sensors may be referred to as an inertial measure unit (IMU). The IMU can include a three-axis accelerometer and a three-axis gyroscope attached to the object, for measuring its specific forces (linear accelerations plus gravitational force and Coriolis force) along three orthogonal axes and its angular rates around three orthogonal rotational axes” and “can construct measurements from data obtained from the IMU 240”);
perform correction on the first motion information and the second motion information on the basis of error models corresponding to the LiDAR sensor (270) and the inertial sensor (240) (Scherzinger teaches at least in Fig.2A and paragraphs [0081] and [0101] “The following inertial sensor errors can be modeled in an AINS estimator using scalar stochastic process models such as a Gauss-Markov or random walk model: accelerometer biases, accelerometer scale errors, accelerometer triad orthogonality errors; gyro biases, gyro scale errors, gyro triad orthogonality errors, and the like” and “The AINS estimator 220 can perform, for example, state initializations, time updates and measurement updates on receipt of the RFM data, and the like. The state vector in the AINS estimator 220 can include the following elements related to RI-AINS: (i) RI sensor errors; (ii) RI sensor installation parameter errors; and (iii) map position and orientation errors”); and
determine final position information of the unmanned mobile vehicle on the basis of the correction (Scherzinger teaches at least in Fig.2A and paragraph [0004] “The output manager 250 is similar to the output manager 150 in FIG. 1 as described above. It can combine the INS solution provided by the INS 230 and the INS solution statistics computed by the AINS estimator 220, and output an absolute pose estimate, including position and orientation”).
In Reference to Claim 8
The apparatus of claim 7 (see rejection to claim 7 above), wherein, in order to obtain the first motion information using the LiDAR sensor (270), the at least one control unit (computer or processor) is further configured to obtain first point information on a surrounding environment; in response to the obtaining of the first point information, obtain second point information on the surrounding environment after a first cycle; and determine motion information corresponding to a minimum error between the first point information and the second point information as the first motion information on the basis of an iterative closest point (ICP) algorithm (Scherzinger teaches at least in Fig.2A and paragraph [0029] “One method of range image registration is the iterative closest point (ICP) method. (See, e.g., Chen and Medioni, Object modeling by registration of multiple range images, Proceedings of the 1991 IEEE International Conference on Robotics and Automation, pp. 2724-2729.) The ICP method can estimate a six degree of freedom (6 DOF) transformation, which includes a 3D translation and a 3D rotation, that brings one range image into alignment with another range image. The ICP method can minimize a registration error cost that is the sum of the distances squared between transformed points in one range image and their nearest neighbors in the other range image. The registration error cost can be minimized by numerical optimization if the optimal rotation is large or by using a closed form solution if the rotation component of the optimal transformation is small. The closed form solution can include a least squares adjustment that results from setting the registration error cost gradient with respect to the transformation components to zero”).
In Reference to Claim 9
The apparatus of claim 8 (see rejection to claim 8 above),
wherein, in order to obtain the second motion information using the inertial sensor (240), the at least one control unit (computer or processor) is further configured to: obtain one or more pieces of velocity information corresponding to a movement of the unmanned mobile vehicle every second cycle; identify velocity information corresponding to the first cycle from among the one or more pieces of velocity information; and generate the second motion information on the basis of the velocity information corresponding to the first cycle (Scherzinger teaches at least in Figs.1 and 2A and paragraphs [0004] and [0076] “An inertial navigation system (INS) is a navigation device that uses motion sensors and rotation sensors to continuously calculate the pose and the velocity of a moving object. The term “pose” may refer to the position and orientation of the object. The term “velocity” may refer to the speed and direction of movement of the object. The motion sensors and rotation sensors may be referred to as an inertial measure unit (IMU). The IMU can include a three-axis accelerometer and a three-axis gyroscope attached to the object, for measuring its specific forces (linear accelerations plus gravitational force and Coriolis force) along three orthogonal axes and its angular rates around three orthogonal rotational axes, respectively. The INS can calculate a solution of the current position and the current orientation of the object by using a previous solution and advancing the previous solution based on integrating estimated velocities (including linear velocity and angular velocity) over elapsed time. This process can be referred to as dead reckoning” and “FIG. 1A shows an exemplary block diagram of an aided INS (AINS).
The inertial measure unit (IMU) 140 can measure the platform specific force (gravitational force plus Coriolis force and platform accelerations) vector and angular rate vector, both about its orthogonal input axes. For example, the IMU 140 can include three accelerometers for measuring linear specific force in three orthogonal directions, respectively, and three gyroscopes for measuring angular rates around three orthogonal axes, respectively. The three accelerometers can be referred to as a three-axis accelerometer. The three gyroscopes can be referred to as a three-axis gyroscope. The IMU 140 can output data records at a relatively high sampling rate, for example at 50 to 1000 samples per second. The IMU 140 can output time-sampled records containing: (i) specific force vector and angular rate vector, or (ii) incremental velocity vector and incremental angle vector. The incremental velocity vector is the specific force vector integrated over the sampling interval. The incremental angle vector is the angular rate vector integrated over the sampling interval. In some embodiments, the incremental velocity vector and the incremental angle vector may be preferred in an INS, because the INS solution may not suffer from accumulated quantization noise”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Scherzinger in view of Pub No. US 2025/0067875 A1 to Sawahashi et al. (Sawahashi).
In Reference to Claim 4
Scherzinger teaches (except for the bolded and italic recitations below):
The method of claim 1 (see rejection to claim 1 above), further comprising:
obtaining a plurality of pieces of point information using the LiDAR sensor (270) (Scherzinger teaches at least in Figs.1 and 2A and paragraph [0096] “The RI sensor(s) 270 can generate a RI data record per data capture epoch. A triggered RI sensor that generates an RI data record at a specified measurement construction time can be assumed without loss of generality to simplify RI data time alignment with measurement construction time”);
identifying one or more pieces of point information corresponding to a predetermined height range from among the plurality of pieces of point information on the basis of the final position information;
identifying an area to which the one or more pieces of point information belong as an obstacle in a two-dimensional (2D) grid map; and
generating a local map on the basis of the 2D grid map and the obstacle.
Scherzinger is silent (bolded and italic recitations above) as to identifying one or more pieces of point information corresponding to a predetermined height range from among the plurality of pieces of point information on the basis of the final position information; identifying an area to which the one or more pieces of point information belong as an obstacle in a two-dimensional (2D) grid map; and generating a local map on the basis of the 2D grid map and the obstacle.
However, it was known in the art before the effective filing date of the claimed invention to identify one or more pieces of point information corresponding to a predetermined height range from among the plurality of pieces of point information on the basis of the final position information; identify an area to which the one or more pieces of point information belong as an obstacle in a two-dimensional (2D) grid map; and generate a local map on the basis of the 2D grid map and the obstacle. For example, Sawahashi teaches identifying one or more pieces of point information corresponding to a predetermined height range from among the plurality of pieces of point information on the basis of the final position information; identifying an area to which the one or more pieces of point information belong as an obstacle in a two-dimensional (2D) grid map; and generating a local map on the basis of the 2D grid map and the obstacle. Sawahashi further teaches that performing such steps provides a high-precision map (see at least Sawahashi Figs. 1-6 and 16 and paragraphs 41-47, 63-64, 110-112, 124 and 167). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Scherzinger to perform the steps of identifying one or more pieces of point information corresponding to a predetermined height range from among the plurality of pieces of point information on the basis of the final position information; identifying an area to which the one or more pieces of point information belong as an obstacle in a two-dimensional (2D) grid map; and generating a local map on the basis of the 2D grid map and the obstacle, as taught by Sawahashi, in order to provide a high-precision map.
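For illustration only, the claimed height-filter-to-grid-map steps can be sketched as follows. This is the editor's hypothetical sketch; the height band, cell size, grid dimensions, and function name are assumptions, not values from the claims, Scherzinger, or Sawahashi.

```python
import numpy as np

# Hypothetical parameters: points whose height (relative to the final
# position) falls in this band are treated as obstacles.
H_MIN, H_MAX = 0.1, 1.5      # predetermined height range, metres
CELL = 0.5                   # grid resolution, metres per cell
SIZE = 20                    # 20 x 20 local grid, centred on the pose

def local_map(points_xyz, final_xyz):
    """Build a 2D occupancy grid: 1 = obstacle cell, 0 = free."""
    grid = np.zeros((SIZE, SIZE), dtype=np.uint8)
    rel = points_xyz - final_xyz                    # points relative to pose
    band = (rel[:, 2] >= H_MIN) & (rel[:, 2] <= H_MAX)  # height filter
    for x, y, _ in rel[band]:
        i = int(x / CELL) + SIZE // 2               # cell indices, centred
        j = int(y / CELL) + SIZE // 2
        if 0 <= i < SIZE and 0 <= j < SIZE:
            grid[i, j] = 1                          # mark obstacle cell
    return grid
```

The sketch makes the claimed ordering explicit: the final position information is used first (to express points relative to the vehicle), the height filter selects obstacle candidates, and the local map is then the 2D grid with those cells marked.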
In Reference to Claim 10
Scherzinger teaches (except for the bolded and italic recitations below):
The apparatus of claim 7 (see rejection to claim 7 above), wherein the at least one control unit is further configured to
obtain a plurality of pieces of point information using the LiDAR sensor (270) (Scherzinger teaches at least in Figs.1 and 2A and paragraph [0096] “The RI sensor(s) 270 can generate a RI data record per data capture epoch. A triggered RI sensor that generates an RI data record at a specified measurement construction time can be assumed without loss of generality to simplify RI data time alignment with measurement construction time”);
identify one or more pieces of point information corresponding to a predetermined height range from among the plurality of pieces of point information on the basis of the final position information;
identify an area to which the one or more pieces of point information belong as an obstacle in a two-dimensional (2D) grid map; and
generate a local map on the basis of the 2D grid map and the obstacle.
Scherzinger is silent as to the bolded and italic recitations above, namely: identify one or more pieces of point information corresponding to a predetermined height range from among the plurality of pieces of point information on the basis of the final position information; identify an area to which the one or more pieces of point information belong as an obstacle in a two-dimensional (2D) grid map; and generate a local map on the basis of the 2D grid map and the obstacle.
However, these limitations were known in the art before the effective filing date of the claimed invention. For example, Sawahashi teaches identifying one or more pieces of point information corresponding to a predetermined height range from among the plurality of pieces of point information on the basis of the final position information; identifying an area to which the one or more pieces of point information belong as an obstacle in a two-dimensional (2D) grid map; and generating a local map on the basis of the 2D grid map and the obstacle. Sawahashi further teaches that performing these steps provides a high-precision map (see at least Sawahashi Figs. 1-6 and 16 and paragraphs 41-47, 63-64, 110-112, 124 and 167). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Scherzinger to perform these steps as taught by Sawahashi in order to provide a high-precision map.
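For context only, the claimed sequence (filter LiDAR points by a predetermined height range relative to the final position, mark the corresponding cells of a 2D grid map as obstacles, and form a local map) can be illustrated by a minimal sketch. All function names, parameters, and thresholds below are hypothetical and are not drawn from Scherzinger, Sawahashi, or the application:

```python
import numpy as np

def build_local_map(points, pose_z, h_min=0.1, h_max=1.5,
                    cell=0.05, size=200):
    """Illustrative sketch of the claimed steps (hypothetical values).

    points : (N, 3) array of LiDAR points in the map frame
    pose_z : height component of the final position information
    """
    # Step 1: keep points whose height relative to the final position
    # falls within the predetermined range [h_min, h_max].
    rel_h = points[:, 2] - pose_z
    selected = points[(rel_h >= h_min) & (rel_h <= h_max)]

    # Step 2: identify the 2D grid cells those points belong to and
    # mark them as obstacles (1 = obstacle, 0 = free/unknown).
    grid = np.zeros((size, size), dtype=np.uint8)
    ij = np.floor(selected[:, :2] / cell).astype(int) + size // 2
    ij = ij[(ij >= 0).all(axis=1) & (ij < size).all(axis=1)]
    grid[ij[:, 1], ij[:, 0]] = 1

    # Step 3: the "local map" here is simply the 2D grid plus the
    # obstacle cells; a real system would add further layers.
    return grid, ij
```

This sketch only mirrors the claim language at the level recited; it does not purport to reproduce either reference's actual algorithm.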
Claim(s) 5-6 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Scherzinger in view of Sawahashi, further in view of Pub. No. US 2023/0061444 A1 to Hong et al. (Hong).
In Reference to Claim 5
Scherzinger in view of Sawahashi teaches (except for the bolded and italic recitations below):
The method of claim 4 (see rejection to claim 4 above), further comprising identifying an unsearched area on the basis of the final position information and a position of the obstacle, wherein the identifying of the unsearched area is repeatedly performed according to a change in the final position information.
Scherzinger in view of Sawahashi does not teach the bolded and italic recitations above, namely identifying an unsearched area on the basis of the final position information and a position of the obstacle, wherein the identifying of the unsearched area is repeatedly performed according to a change in the final position information. However, this limitation was known in the art before the effective filing date of the claimed invention. For example, Hong teaches identifying an unsearched area on the basis of the final position information and a position of the obstacle, wherein the identifying of the unsearched area is repeatedly performed according to a change in the final position information. Hong further teaches that performing this step may reduce unnecessary movement and efficiently detect a travel destination (see at least Hong Figs. 1-8 and paragraphs 6, 25, 88 and 120-126). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Scherzinger in view of Sawahashi to perform this step as taught by Hong in order to reduce unnecessary movement and efficiently detect a travel destination.
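For context only, the repetition aspect of this limitation (re-running the unsearched-area identification whenever the final position information changes) can be illustrated by a minimal sketch. All names and the movement threshold are hypothetical and not drawn from Hong:

```python
import math

def track_unsearched(poses, obstacle, min_move=0.2):
    """Illustrative sketch: re-perform the unsearched-area
    identification each time the final position changes by more
    than min_move (hypothetical threshold).

    poses    : sequence of (x, y) final-position estimates over time
    obstacle : fixed (x, y) obstacle position
    Returns the (pose, obstacle) pairs for which the identification
    step was (re)performed.
    """
    runs = []
    last = None
    for p in poses:
        # Identification depends on both the final position
        # information and the obstacle position, per the claim.
        if last is None or math.dist(p, last) > min_move:
            runs.append((p, obstacle))
            last = p
    return runs
```

This only mirrors the claim's repetition condition; Hong's actual re-identification trigger may differ.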
In Reference to Claim 6
The method of claim 5 (see rejection to claim 5 above), wherein the identifying of the unsearched area further includes identifying a virtual line connecting the final position information and the position of the obstacle, and the unsearched area includes an area that is present on an opposite side of the final position information in a direction of the virtual line (see at least Hong Figs. 1-8 and paragraph [0088] and [0139] “The processor 120 may determine a moving path for moving from the first location, which is the current location of the cleaning robot 100, to the first unsearched area. In an embodiment of the disclosure, the processor 120 may obtain information about at least one via point, which is passed through in moving from the first location to the location of the first unsearched area, and perform path-planning by using the obtained information about the at least one via point, so as to optimize the moving path. In an embodiment of the disclosure, the processor 120 may optimize the moving path by merging or deleting at least one via point based on the shortest distance between the first location and the location of the first unsearched area and location information of an obstacle adjacent to the line indicating the shortest distance. An example of an embodiment in which the processor 120 establishes a path plan for optimizing a moving path will be described in detail with reference to FIGS. 8 and 9”).
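The virtual-line geometry recited in claim 6 (the unsearched area lies on the opposite side of the final position along the line through the obstacle) can be illustrated, for context only, by a short sketch. All names and the step size are hypothetical and not drawn from Hong:

```python
import numpy as np

def unsearched_cell(pose_xy, obstacle_xy, step=0.5):
    """Illustrative sketch: a virtual line connects the final
    position to the obstacle; a candidate unsearched point lies
    past the obstacle, on the side opposite the final position,
    along the line's direction (step size is hypothetical)."""
    pose = np.asarray(pose_xy, dtype=float)
    obs = np.asarray(obstacle_xy, dtype=float)
    direction = obs - pose                  # virtual line direction
    direction /= np.linalg.norm(direction)  # unit vector
    # Extend beyond the obstacle, away from the vehicle's position.
    return obs + step * direction
```

The sketch captures only the geometric relationship recited in the claim, not Hong's path-planning implementation.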
In Reference to Claim 11
Scherzinger in view of Sawahashi teaches (except for the bolded and italic recitations below):
The apparatus of claim 10 (see rejection to claim 10 above), wherein the at least one control unit is further configured to identify an unsearched area on the basis of the final position information and a position of the obstacle, and the at least one control unit repeatedly performs the identification of the unsearched area according to a change in the final position information.
Scherzinger in view of Sawahashi does not teach the bolded and italic recitations above, namely identify an unsearched area on the basis of the final position information and a position of the obstacle, and the at least one control unit repeatedly performs the identification of the unsearched area according to a change in the final position information. However, this limitation was known in the art before the effective filing date of the claimed invention. For example, Hong teaches identifying an unsearched area on the basis of the final position information and a position of the obstacle, and repeatedly performing the identification of the unsearched area according to a change in the final position information. Hong further teaches that performing this step may reduce unnecessary movement and efficiently detect a travel destination (see at least Hong Figs. 1-8 and paragraphs 6, 25, 88 and 120-126). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Scherzinger in view of Sawahashi to perform this step as taught by Hong in order to reduce unnecessary movement and efficiently detect a travel destination.
In Reference to Claim 12
The apparatus of claim 11 (see rejection to claim 11 above), wherein, in order to identify the unsearched area, the at least one control unit is further configured to identify a virtual line connecting the final position information and the position of the obstacle, and the unsearched area includes an area that is present on an opposite side of the final position information in a direction of the virtual line (see at least Hong Figs. 1-8 and paragraph [0088] and [0139] “The processor 120 may determine a moving path for moving from the first location, which is the current location of the cleaning robot 100, to the first unsearched area. In an embodiment of the disclosure, the processor 120 may obtain information about at least one via point, which is passed through in moving from the first location to the location of the first unsearched area, and perform path-planning by using the obtained information about the at least one via point, so as to optimize the moving path. In an embodiment of the disclosure, the processor 120 may optimize the moving path by merging or deleting at least one via point based on the shortest distance between the first location and the location of the first unsearched area and location information of an obstacle adjacent to the line indicating the shortest distance. An example of an embodiment in which the processor 120 establishes a path plan for optimizing a moving path will be described in detail with reference to FIGS. 8 and 9”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Pub. No. US 2021/0278848 A1 to An et al. (An) teaches using LiDAR and an IMU to determine the location of a mobile vehicle.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON DONGPA LEE whose telephone number is (571)270-3525. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRANDON D LEE/Primary Examiner, Art Unit 3662 January 24, 2026