DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-7 and 15-22 are pending and examined below. This action is in response to the claims filed 12/4/25.
Response to Amendment
Applicant’s arguments regarding the 35 U.S.C. § 101 rejections, see Applicant Remarks, 35 U.S.C. § 101, filed 12/4/25, are persuasive in view of the amendments filed 12/4/25. The 35 U.S.C. § 101 rejections are withdrawn.
Applicant’s arguments regarding the 35 U.S.C. § 102 rejections, see Applicant Remarks Section II.A. filed 12/4/25, have been fully considered but are not persuasive.
Applicant’s remarks, pages 11-15, primarily assert:
Mei does not show or suggest the features of claim 1 identified above, including using a collision signal from the top of the self-moving device as a trigger to acquire feature information, or establishing correspondence between suspension-position information (e.g., three-dimensional coordinates) and suspension-height information.
However, the identification of floating obstacle candidates in Mei corresponds to the claimed collision signal triggering the acquisition of further information and processing to determine whether a floating obstacle is truly present. As taught in Mei, the identification of floating obstacle candidates alerts the vehicle to a potential collision with an obstacle ahead of the vehicle, which then requires further analysis to determine whether it is a false positive or an actual floating obstacle. Although Mei does not label the identification of a potential floating obstacle as a collision signal, it is a signal that identifies a potential collision ahead, and therefore discloses a collision signal that triggers the subsequent data processing to determine the validity of the signal, as is done in the claims.
Further defining the collision signal to distinguish over the identification of potential obstacles as taught in Mei might overcome the rejection as written; however, additional explicit details are required to distinguish the broadest reasonable interpretation (BRI) of the claims from the details disclosed in the prior art.
Therefore, the rejections are maintained and updated to address the amendments of 12/4/25.
The Office encourages the use of interviews to promote an efficient and constructive examination process: when the examiner’s and applicant’s interpretations of both the art of record and the present application are mutually understood, the resulting discussions and amendments can most productively advance prosecution toward allowance.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless—
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7 and 15-22 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Mei et al. (US 2017/0185089).
Regarding claims 1, 15, and 16, Mei discloses an autonomous vehicle overhanging object detection system including an electronic device/non-transitory computer-readable storage medium storing a computer program/method, comprising: one or more processors; and a storage apparatus, configured to store one or more programs, wherein, when the one or more programs are executed by the one or more processors, an obstacle avoidance method for a self-walking device is performed, comprising (Abstract and ¶86-87):
acquiring a collision signal at a top of a self-walking device when the self-walking device walks along a current travel route (¶58-61, ¶78-82, and Fig. 2 – identifying floating obstacle candidates from a forward portion of the external environment corresponding to the recited collision signal at a top of a self-walking device along a travel route);
acquiring current feature information of a surrounding obstacle in response to the collision signal, the current feature information comprising suspension height information of a suspending obstacle over the top of the self-walking device (¶42-48 – floating obstacle candidate analysis includes acquiring current feature information, including whether the candidate is spaced from the ground in the substantially vertical direction, corresponding to the recited current feature information of a surrounding obstacle comprising suspension height information in response to identifying floating obstacle candidates corresponding to the recited collision signal);
wherein the feature information comprises a correspondence relationship between suspension position information and suspension height information, and the suspension position information indicates coordinate information of a detected suspending obstacle in a three-dimensional space (¶27, ¶39-44, and ¶71 – object data includes positioning data relative to boundary information in both the lateral direction corresponding to the recited suspension position information as well as height corresponding to the recited suspension height information where the sensor data for identified objects is mapped to a 3D coordinate system);
acquiring historical feature information of a region in which a current position is located (¶19 – map data includes information or data on roads, traffic control devices, road markings, structures, features, and/or landmarks in the one or more geographic areas corresponding to the recited historical feature information of a region in which a current position is located); and
re-planning a travel route based on the current feature information, the historical feature information, and preset feature information of a machine body profile, and controlling the self-walking device to move based on the travel route to avoid the suspending obstacle (¶42, ¶58-61, ¶78-83, and Fig. 2 – element 230 corresponding to the recited re-planning a travel route to avoid the suspending obstacle based on height clearance between the autonomous vehicle 100 and the remaining one or more floating obstacle candidates corresponding to the recited current feature information and preset feature information of a machine body profile as well as utilizing map data corresponding to the recited historical feature information where the replanned travel route is implemented based on the above determined information).
Regarding claims 2 and 17, Mei further discloses wherein the acquiring the current feature information of the surrounding obstacle comprises (¶42-48 – floating obstacle candidate analysis includes acquiring position information):
acquiring the current feature information of the surrounding obstacle by a structured light assembly arranged on the self-walking device (¶42-48 – floating obstacle candidate analysis includes acquiring position information where obstacle candidates are determined using LIDAR sensor data corresponding to the recited structured light assembly).
Regarding claims 3 and 18, Mei further discloses the feature information of the machine body profile comprises size information of the machine body profile (¶43 and ¶50 – height boundary as well as fully defined shape of the vehicle corresponding to the recited size information of the machine body profile); and
the re-planning the travel route based on the current feature information, the historical feature information, and the preset feature information of the machine body profile comprises (¶42, ¶58-61, ¶78-82, and Fig. 2 – element 230 corresponding to the recited re-planning a travel route to avoid the suspending obstacle based on height clearance between the autonomous vehicle 100 and the remaining one or more floating obstacle candidates corresponding to the recited current feature information and preset feature information of a machine body profile as well as utilizing map data corresponding to the recited historical feature information):
generating the newest feature information by updating the historical feature information based on the current feature information (¶42 – filtering object data corresponding to the recited newest feature information by updating map data corresponding to the recited historical feature information based on object positioning spacing from the ground in the substantially vertical direction corresponding to the recited current feature information); and
re-planning the travel route based on the newest feature information and the size information (¶50, ¶58-61, and Fig. 2 – determining a driving maneuver based on filtered object data corresponding to the recited newest feature information and height clearance between the autonomous vehicle 100 and the remaining one or more floating obstacle candidates after the floating obstacle candidates are filtered out where the height clearance is based on the defined shape of the autonomous vehicle corresponding to the recited size information).
Regarding claims 4 and 19, Mei further discloses the generating the newest feature information by updating the historical feature information based on the current feature information comprises updating historical suspension height information corresponding to the suspension position information based on current suspension height information (¶41-43 – filtering object data corresponding to the recited newest feature information by updating map data corresponding to the recited historical feature information based on object positioning spacing from the ground in the substantially vertical direction as well as relative to lateral boundary information corresponding to the recited current feature information comprising suspension position information based on current suspension height information).
Regarding claims 5 and 20, Mei further discloses the historical height information comprises historical elevation map information (¶19 – map data corresponding to the recited historical information includes terrain data such as elevation data in one or more geographic areas corresponding to the recited historical elevation map information);
the historical elevation map information comprises information of multiple adjacent unit regions and historical elevation information corresponding to the information of each unit region, wherein the information of each unit region is associated with the suspension position information, and the information of all the unit regions constitute information of a plane region associated with preset task information (¶19 and ¶39-43 – map data including elevation data corresponding to the recited historical elevation map information includes one or more predefined boundaries dividing the maps into multiple adjacent unit regions with associated map data which is compared to the object data including suspension position information where the map data including the associated predefined boundaries is associated with the travel paths for the autonomous vehicle corresponding to the recited preset task information); and
the updating the historical suspension height information corresponding to the suspension position information based on the current suspension height information comprises (¶41-43 – filtering object data corresponding to the recited newest feature information by updating map data corresponding to the recited historical feature information based on object positioning spacing from the ground in the substantially vertical direction as well as relative to lateral boundary information corresponding to the recited current feature information comprising suspension position information based on current suspension height information):
performing a classification operation based on the information of unit regions to acquire the suspension position information of a respective category (¶73-78 – floating obstacle candidates can be classified as floating obstacles, overhanging objects, or other categories based on predefined parameters including the recited suspension position information);
determining first suspension height information of the respective category based on the current suspension height information corresponding to the suspension position information of each category; and updating the historical suspension height information for the information of unit regions of the respective category based on the first suspension height information of each category (¶41-43 and ¶73-78 – the acquired object data, including the height information relative to the lateral boundary information corresponding to the recited current feature information, is compared to the respective predefined parameters to determine the classification of the floating obstacle candidate, corresponding to the recited suspension position information of each category, to filter out false positives, corresponding to the recited updating the historical suspension height information for the information of unit regions of the respective category based on the first suspension height information of each category).
Regarding claims 6 and 21, Mei further discloses wherein the determining the first suspension height information of the respective category based on the current suspension height information corresponding to the suspension position information of each category comprises: acquiring first suspension height information of the respective category with the minimum value based on the current suspension height information corresponding to the suspension position information of each category (¶41-43 and ¶73-78 – the acquired object data, including the height information relative to the lateral boundary information corresponding to the recited current feature information, is compared to the respective predefined parameters to determine the classification of the floating obstacle candidate, where the predefined parameter thresholds correspond to the recited minimum value for each respective category).
Regarding claims 7 and 22, Mei further discloses after acquiring the current feature information of the surrounding obstacle, marking the current feature information of a detected suspending obstacle (¶78 – the remaining floating obstacle candidates can be classified as overhanging objects to determine whether any driving maneuvers are needed to avoid a collision with the overhanging object, corresponding to the recited marking the current feature information of a detected suspending obstacle).
Additional References Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Watts et al. (US 9,746,852) discloses a robotic navigation system including utilizing sensor data to detect and avoid overhanging obstacles (Col 20:1-18).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew J Reda whose telephone number is (408)918-7573. The examiner can normally be reached Monday - Friday 7-4 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry, can be reached at (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW J. REDA/ Primary Examiner, Art Unit 3665