DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Summary
The Amendment filed on 2 December 2025 has been acknowledged.
Claims 1, 9 – 10 and 16 have been amended.
Claim 2 has been cancelled.
Currently, claims 1, 3 – 4 and 6 – 16 are pending and considered as set forth.
Response to Arguments
Regarding the 35 U.S.C. § 103 rejection:
Applicant’s arguments with respect to claims 1 and 9 – 10 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The amendment of independent claims 1 and 9 – 10 incorporates partial limitations from cancelled claim 2, but the language has been modified; hence, the interpretation has changed and a new ground of rejection has been applied.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3 – 4 and 6 – 16 are rejected under 35 U.S.C. 103 as being unpatentable over Rikoski (US 2012/0281507 A1) in view of Agarwal et al. (hereinafter Agarwal) (WO 2018/089703 A1) and in further view of Trehard et al. (hereinafter Trehard) (US 2017/0261996 A1).
As per claim 1, Rikoski discloses a method comprising:
performing simultaneous localization and mapping of an object in an indoor environment (See at least paragraph 162 and 168; the vehicle may include a robotic vehicle for traversing an indoor terrain … FIG. 12 depicts a process for simultaneous localization and mapping (SLAM) using real aperture sonar images) comprising:
transmitting a polling signal in the indoor environment using a transmitter of the object (See at least paragraph 46; the system 100 includes a sonar unit 110 for transmitting and receiving acoustic signals. The sonar unit includes a transducer array 112 having one or more transmitting elements or projectors and a plurality of receiving elements arranged in a row. Paragraph 162; the vehicle could be a robotic vehicle for traversing an indoor terrain. The Examiner construes the polling signal as a signal emitted by a sonar sensor according to the specification (third paragraph of page 8) of the current application);
receiving a response signal in a form of reflection of the transmitted polling signal using a receiver of the object (See at least paragraphs 39 and 46; FIG. 1 is a block diagram depicting a sonar mapping and navigation system 100, according to an illustrative embodiment of the present disclosure. The system 100 includes a sonar unit 110 for sending and receiving sonar signals, a preprocessor 120 for conditioning a received (or reflected) signal, and a matched filter 130 for performing pulse compression and beamforming.);
discriminating a position of an obstacle on the basis of information relating to the detected movement from among a plurality of positions provided on the basis of the received response signal (See at least paragraph 39, 48 – 52; The signals received by the receiver 114 are sent to a preprocessor for conditioning and compensation. Specifically, the preprocessor 120 includes a filter conditioner 122 for eliminating outlier values and for estimating and compensating for hydrophone variations. The preprocessor further includes a Doppler compensator 124 for estimating and compensating for the motion of the vehicle. The preprocessed signals are sent to a matched filter 130. The matched filter 130 includes a pulse compressor 132 for performing matched filtering in range, and a beamformer 134 for performing matched filtering in azimuth and thereby perform direction estimation. The signal corrector 140 includes a grazing angle compensator 142 for adjusting sonar images to compensate for differences in grazing angle. Typically, if a sonar images a collection of point scatterers the image varies with observation angle. For example, a SAS system operating at a fixed altitude and heading observing a sea floor path will produce different images at different ranges. Similarly, SAS images made at a fixed horizontal range would change if altitude were varied. In such cases, changes in the image would be due to changes in the grazing angle. The grazing angle compensator 142 is configured to generate grazing angle invariant images. One such grazing angle compensator is described in U.S. patent application Ser. No. 12/802,454 titled "Apparatus and Method for Grazing Angle Independent Signal Detection," the contents of which are incorporated herein by reference in their entirety. The signal corrector 140 includes a phase error corrector 144 for correcting range varying phase errors. 
The phase error corrector 144 may correct for phase error using a technique described with reference to FIG. 7. Generally, the phase error corrector 144 breaks the image up into smaller pieces, each piece having a substantially constant phase error. Then, the phase error may be estimated and corrected for each of the smaller pieces. The system 100 further includes a signal detector 150 having a signal correlator 152 and a storage 154. The signal detector 150 may be configured to detect potential targets, estimate the position and velocity of a detected object and perform target or pattern recognition. In one embodiment, the storage 154 may include a map store, which may contain one or more previously obtained SAS images, real aperture images, or any other suitable sonar image); and
navigating the object through the indoor environment using a navigating device based on the generated map (See at least paragraph 10 and 15; Both sidescan and SAS technologies have been used for map-based navigation systems. Sidescan sonar images have been incoherently processed using template matching and spatial constraints to provide navigational information and recognize mine-like objects … the use of orthogonal signals for SAS, overpinging with multiple transmitters, and holographic simultaneous localization and mapping (SLAM).).
Although Rikoski teaches aspects of a localization and mapping device, detecting movement of an object, and performing the generation of the map and the detection of the movement jointly (See at least paragraph 15 for SLAM and paragraphs 37 – 38 for mapping and navigating of the robot, which tracks the location of the robot and performs localization and mapping while the robot is navigating the terrain), Rikoski does not explicitly teach the limitation of:
generating a map of the indoor environment using a localization and mapping device based on the response signal and movement of the object, the generation of the map and detection of the movement being performed jointly.
Agarwal teaches the limitations of: generating a map of the indoor environment using a localization and mapping device based on the response signal and movement of the object, the generation of the map and detection of the movement being performed jointly (See at least paragraphs 4, 43 and 46; When a robot is not given a priori knowledge of its environment, it must use its sensory data and actions to concurrently build a map of its environment and localize itself within its stochastic map, this is referred to as Simultaneous Localization and Mapping (SLAM). In SLAM, estimation errors tend to build up during exploratory motion and are usually negated by revisiting previously seen locations. … The exteroceptive sensor (i.e., a range bearing sensor) may include a camera, a LIDAR system, a RADAR system, a SONAR system, or other system for providing distance and bearing measurements to features in view of the vehicle. For example, features outdoors may include stars when a star tracker is used; and features indoors may include light fixtures or other fixtures on the ceiling. … a method for long term SLAM with absolute orientation sensing 200. The embodiment may be implemented by a mapping system. ... A scan matcher 240 may determine a relative pose based on input from a range finding sensor 220 and/or a movement sensor 230. Range finding sensor 220 may be a LiDAR, sonar, radar or some other sensor configured to detect objects and their range from the vehicle. Movement sensor 230 may be an IMU, a wheel encoder, or any other sensor configured to detect movement of the vehicle and estimate a distance traveled based on the movement detected).
Agarwal also teaches the limitation of transmitting a polling signal in the indoor environment using a transmitter of the object (See at least paragraphs 4 – 5; When a robot is not given a priori knowledge of its environment, it must use its sensory data and actions to concurrently build a map of its environment and localize itself within its stochastic map, this is referred to as Simultaneous Localization and Mapping (SLAM). In SLAM, estimation errors tend to build up during exploratory motion and are usually negated by revisiting previously seen locations. Some robots may be operated indoors where GPS is unavailable or degraded. For example, material handling robots that move goods (boxes, pallets etc.) in large warehouses and distribution centers do not have access to GPS satellites).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Rikoski, which determines the location of a robot and updates the robot's map using a SLAM technique and polling signals from sonar sensors, to include generating a map of the indoor environment and detecting movement of the object based on the response signal using a localization and mapping device, the generation of the map and detection of the movement being performed jointly, as taught by Agarwal, in order to determine a location of the vehicle and control the vehicle (See at least paragraphs 4 and 46).
The combination of Rikoski and Agarwal teaches the limitation wherein the generation of the map at a given time depends on information relating to the movement detected at the given time (Agarwal, see at least paragraph 43 (the SLAM process is performed using sensors over a time interval)), but does not explicitly teach the limitation of:
wherein the map generation at a given instant is a function of the detected movement at the given instant.
Trehard teaches the limitations of:
wherein the map generation at a given instant is a function of the detected movement at the given instant (See at least paragraphs 91 and 93; On this subject, reference may be made to the paper “A real-time robust SLAM for large-scale outdoor environments” by J. Xie, F. Nashashibi, M. N. Parent and O. Garcia-Favrot, in ITS World Congr. 2010. … The combination module 18 then combines the information supplied by the sensor 2 at the instant t (represented by the masses M_SCAN.sub.t,i(A)) and the information constructed by the device 10 up to the instant t−1 (represented by the masses M_GRI.sub.t-1,k(A)), in order to deduce therefrom raw masses M_COMB.sub.t,k(A) representative of the state of knowledge of the fixed grid at the instant t, for example by applying to each cell k of the fixed grid a formula of the following type: …).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include wherein the map generation at a given instant is a function of the detected movement at the given instant as taught by Trehard in the system of Rikoski and Agarwal, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 3, the combination of Rikoski and Agarwal teaches the element of:
wherein generating the map comprises detecting the obstacle on the basis of a response signal (Agarwal, see at least paragraph 45 – 46).
As per claim 4, the combination of Rikoski and Agarwal teaches the element of:
wherein generating the map comprises filtering, in the received response signal, a part resulting from the polling signal (Rikoski, see at least paragraph 40, Agarwal, see at least paragraph 9).
As per claim 6, the combination of Rikoski and Agarwal does not explicitly teach the element of:
wherein generating the map comprises weighting a detected obstacle by a probability when an obstacle detection detects a plurality of obstacles at a given time in the response signal, the probability associated with an obstacle being dependent, at the given time, on amplitude of the received response signal resulting from the obstacle and on the detected movement information.
Trehard teaches the elements of:
wherein generating the map comprises weighting a detected obstacle by a probability when an obstacle detection detects a plurality of obstacles at a given time in the response signal, the probability associated with an obstacle being dependent, at the given time, on amplitude of the received response signal resulting from the obstacle and on the detected movement information (See at least paragraph 6 – 11, 20, 25 and 77).
As per claim 7, the combination of Rikoski, Agarwal and Trehard teaches the element of:
wherein generating the map comprises dividing the map into a plurality of cells, each cell being associated with an obstacle of the plurality of obstacles (Trehard, see at least paragraph 45, and 50).
As per claim 8, the combination of Rikoski, Agarwal and Trehard teaches the element of:
wherein generating the map comprises associating the obstacle having the highest probability with a cell of the map divided into the plurality of cells in response to a plurality of the obstacles being detected for the cell (Trehard, see at least paragraph 51 – 53 and 61).
As per claim 11, the combination of Rikoski and Agarwal teaches the element of:
the localization and mapping device; and a computer configured to determine a path on the basis of a position of the object and the generated map provided by the localization and mapping device (Agarwal, see at least paragraph 46).
As per claim 12, the combination of Rikoski and Agarwal teaches the element of:
wherein the object comprises the localization and mapping device (Rikoski, see at least paragraph 37).
As per claim 13, the combination of Rikoski and Agarwal teaches the element of:
wherein the object comprises a screen capable of reproducing, in real time, the map generated by the localization and mapping device (Rikoski, see at least paragraph 57 and 170).
As per claim 14, the combination of Rikoski and Agarwal teaches the element of:
wherein the object comprises the navigation device (Rikoski, see at least paragraph 38).
As per claim 15, the combination of Rikoski and Agarwal teaches the element of:
wherein the object comprises a locomotor system including a controller configured to control at least one direction of the locomotor system on the basis of a path determined by the navigation device on the basis of a position of the object and the generated map provided by the localization and mapping device (Agarwal, see at least paragraph 46).
As per claim 16, the combination of Rikoski, Agarwal and Trehard teaches the elements of:
obtaining internal movement information indicating a direction of movement of the object using a sensor of the object (Agarwal, see at least paragraph 46); and
discriminating the position of the obstacle includes discriminating the position of the obstacle on the basis of the internal movement information (Trehard, see at least paragraph 7, and 34).
Regarding claims 9 – 10:
Claims 9 – 10 are rejected using the same rationale, mutatis mutandis, as applied to claim 1 above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IG T AN whose telephone number is (571)270-5110. The examiner can normally be reached M - F: 10:00AM- 4:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
IG T AN
Primary Examiner
Art Unit 3662
/IG T AN/Primary Examiner, Art Unit 3662