DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Summary
This communication is a First Office Action Non-Final Rejection on the merits.
Claims 2 – 21 are currently pending and considered below.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 4 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim 4 recites the limitation: “the second distance being less than the first distance.” However, this limitation is not supported by the specification; paragraph 111 instead describes the opposite relationship (the range 908 of distances may extend from a first distance 914 from the autonomous vehicle 402 to a second distance 912 from the autonomous vehicle 402). Clarification is required.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 4 recites the limitation: “the second distance being less than the first distance.” However, it is not clear why the third location would need to be re-scanned if the first location is farther than the third location at the same azimuth. Most imaging sensors, such as lidar, radar, and cameras, would already have detected an object or the goal location at the third location, because they read everything along the line of sight out to the first location when the first location is farther than the third location. The specification (paragraph 111) instead describes expanding the scan to a farther distance, which makes the claim limitation even less clear. Clarification is required. For purposes of examination, the Examiner will construe that if the first location is scanned using any imaging sensor or TOF (time-of-flight) sensor, then checking the first location already teaches checking the third location.
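By way of illustration only, the Examiner's construction above may be expressed as a minimal sketch (Python; the function, parameters, and tolerance are hypothetical and are not drawn from the claims or the cited references): an idealized ranging sensor that reads every return along a ray out to the first location necessarily observes any nearer location at the same azimuth.

```python
import math

def covered_by_scan(scan_distance_m: float, scan_azimuth_deg: float,
                    point_distance_m: float, point_azimuth_deg: float,
                    azimuth_tol_deg: float = 0.1) -> bool:
    """Return True if a point lies on the line of sight already swept.

    Assumes an idealized ranging sensor (lidar, radar, TOF) that reads every
    return along a ray out to scan_distance_m: any point at the same azimuth
    and a shorter distance has necessarily been observed.
    """
    same_azimuth = math.isclose(scan_azimuth_deg % 360.0,
                                point_azimuth_deg % 360.0,
                                abs_tol=azimuth_tol_deg)
    return same_azimuth and point_distance_m <= scan_distance_m

# Scanning the first location (20 m at azimuth 45 deg) already covers a
# third location at the same azimuth but a shorter second distance (12 m).
assert covered_by_scan(20.0, 45.0, 12.0, 45.0)
```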
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2 – 7 and 10 – 12 are rejected under 35 U.S.C. 103 as being unpatentable over Leach et al. (hereinafter Leach) (US 2019/0092287) in view of Ko et al. (hereinafter Ko) (US 2020/0379456).
As per claim 2, Leach teaches the limitations of:
a system for directing a field-of-view of a first sensor positioned on an autonomous vehicle, the system comprising:
at least one processor programmed to perform operations (See at least paragraph 5; The sensor control system includes a computing system comprising one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations.) comprising:
selecting a first location, the first location being a first distance from the autonomous vehicle and having a first azimuth position (See at least paragraphs 31 – 32 and 36; the autonomous vehicle can include a sensor alignment system that adjusts alignment of the one or more sensors. For example, the sensor alignment system can include a rotational assembly coupled to each sensor within a sensor system. The rotational assembly can be configured to mechanically adjust the physical position of the one or more sensors in one or more dimensions (e.g., a first dimension corresponding to a lateral direction, a second dimension corresponding to a longitudinal direction, and/or a third dimension corresponding to a normal/vertical direction). The physical position of the one or more sensors can be adjusted directly or indirectly by adjusting the physical position of a component on which a sensor can be mounted (e.g., a side-view mirror). The sensor control system can be configured to generate a control action signal indicative of a desired alignment at which the rotational assembly can position the one or more sensors. In some implementations, the sensor control system can include a sensor compensation system configured to determine a compensation factor for sensor data received from the one or more sensors. For example, when monitored parameters indicate a change in location of one or more reference objects, the sensor compensation system can determine an adjusted location of objects detected within sensor data. In another example, when monitored parameters indicate a change in the comparison of an estimated local motion parameter determined at least in part from object motion detected by a given sensor to an actual vehicle motion parameter, the sensor compensation system can determine an adjusted motion parameter. A compensation factor corresponding to one or more adjusted parameters can be particularly useful when a parameter change is indicative of misalignment of the one or more sensors. If alignment cannot be immediately or readily corrected, adjusting data currently obtained by a misaligned sensor can be a potentially temporary solution until alignment can be properly corrected by a service request or otherwise. … accessing data descriptive of one or more monitored parameters associated with sensor data, whereby the same data can be used to determine sensor misalignment and/or contamination as is used to determine the location of objects within the surrounding environment of the autonomous vehicle. By observing changes in parameters that are monitored as part of sensor performance in object detection and tracking for autonomous vehicle navigation, separate sensor monitoring algorithms or interruption of sensor data gathering for object detection and tracking may not be required. As such, a more streamlined and efficient solution for monitoring and controlling sensors within an autonomous vehicle sensor system can be realized.);
modifying the field-of-view of the first sensor to direct the first sensor towards the goal location (See at least paragraphs 21, 24, 26, and 31 – 32; [t]o ensure proper operation of the sensors, the computing system of the sensor control system could then implement a control action such as automatically adjusting alignment of the first sensor (e.g., adjusting the mirror position and/or first sensor position such that the first sensor can obtain sensor data in which the reference objects have returned to their initial location). … In response, the computing system of the sensor control system could then implement a control action such as automatically adjusting alignment of the first sensor. … In some implementations, a computing system associated with a sensor control system can automatically adjust alignment of the one or more sensors when a change in the monitored parameter(s) associated with the one or more sensors is indicative of potential misalignment. In some implementations, a computing system associated with a sensor control system can determine a compensation factor for sensor data received from the one or more sensors (e.g., an adjusted location of objects detected within sensor data, an adjusted motion parameter derived from sensor data, etc.) … , the autonomous vehicle can include a sensor alignment system that adjusts alignment of the one or more sensors. For example, the sensor alignment system can include a rotational assembly coupled to each sensor within a sensor system. The rotational assembly can be configured to mechanically adjust the physical position of the one or more sensors in one or more dimensions (e.g., a first dimension corresponding to a lateral direction, a second dimension corresponding to a longitudinal direction, and/or a third dimension corresponding to a normal/vertical direction). The physical position of the one or more sensors can be adjusted directly or indirectly by adjusting the physical position of a component on which a sensor can be mounted (e.g., a side-view mirror). The sensor control system can be configured to generate a control action signal indicative of a desired alignment at which the rotational assembly can position the one or more sensors. In some implementations, the sensor control system can include a sensor compensation system configured to determine a compensation factor for sensor data received from the one or more sensors. For example, when monitored parameters indicate a change in location of one or more reference objects, the sensor compensation system can determine an adjusted location of objects detected within sensor data. In another example, when monitored parameters indicate a change in the comparison of an estimated local motion parameter determined at least in part from object motion detected by a given sensor to an actual vehicle motion parameter, the sensor compensation system can determine an adjusted motion parameter. A compensation factor corresponding to one or more adjusted parameters can be particularly useful when a parameter change is indicative of misalignment of the one or more sensors. If alignment cannot be immediately or readily corrected, adjusting data currently obtained by a misaligned sensor can be a potentially temporary solution until alignment can be properly corrected by a service request or otherwise.), but does not explicitly teach the limitations of:
determining that the first location fails to meet at least one goal condition;
after determining that the first location fails to meet the at least one goal condition, selecting a second location, the second location being the first distance from the autonomous vehicle and having a second azimuth position different than the first azimuth position;
determining that the second location fails to meet the at least one goal condition;
after determining that the second location fails to meet the at least one goal condition, determining that a goal location does meet the at least one goal condition, the goal location being different than the first location and the second location; and
controlling the autonomous vehicle based at least in part on an output of the first sensor after the modifying of the field-of-view of the first sensor.
Ko teaches the limitations of:
determining that the first location fails to meet at least one goal condition; after determining that the first location fails to meet the at least one goal condition, selecting a second location, the second location being the first distance from the autonomous vehicle and having a second azimuth position different than the first azimuth position; determining that the second location fails to meet the at least one goal condition; after determining that the second location fails to meet the at least one goal condition, determining that a goal location does meet the at least one goal condition, the goal location being different than the first location and the second location (See at least paragraphs 14 and 73; the sensors can be repositioned to increase sensor coverage, provide instantaneous field of view, and target specific areas or objects. The sensors can also be repositioned to account for changes in the vehicle's motion, driving angles and direction, as well as relative changes in the vehicle's environment and the motion, angle, and position of surrounding objects. The dynamic and adaptable sensor repositioning herein can improve the sensors' visibility, accuracy, and detection capabilities. The sensor repositioning platform can allow autonomous vehicles to monitor their surroundings and obtain a robust understanding of their environment. Moreover, the sensor repositioning platform and associated functionality can provide significant benefits in cost, sensor data redundancy, and sensor fusion. … the different areas of interest can include an area along the different trajectory 508 which the autonomous vehicle 102 is crossing or plans to cross, and an area that the autonomous vehicle 102 needs to check for objects (e.g., oncoming/incoming vehicles, pedestrians, etc.) before or while the autonomous vehicle 102 travels in or towards the different trajectory 508 (e.g., before or while the autonomous vehicle 102 crosses a lane, makes a turn, makes a maneuver, changes direction, etc.). Other non-limiting examples of areas of interest that can be targeted through the repositioning of the payload carrier structures 220 can include an area where a certain object or condition is located that the autonomous vehicle 102 is tracking, a blind spot, an area for which the autonomous vehicle 102 wants to collect more sensor data (e.g., to gain greater insight or visibility into the area and/or the surrounding environment, to confirm that no safety hazards or approaching objects exist, etc.), an area for which the autonomous vehicle 102 wants to get new or additional sensor data, and/or any other area that may be of interest to the autonomous vehicle 102 for any reason (e.g., safety, navigation, visibility, localization, mapping, etc.). The Examiner construes that continually changing the target location for vehicle navigation trajectories while avoiding obstacles is equivalent to changing the goal location when the goal location fails to meet the criteria.); and
controlling the autonomous vehicle based at least in part on an output of the first sensor after the modifying of the field-of-view of the first sensor (See at least paragraphs 14 and 36 – 37; the sensors can be repositioned to increase sensor coverage, provide instantaneous field of view, and target specific areas or objects. The sensors can also be repositioned to account for changes in the vehicle's motion, driving angles and direction, as well as relative changes in the vehicle's environment and the motion, angle, and position of surrounding objects. … In addition to providing the sensors 104-108 access to, and/or visibility into, the external or outside environment, as further described herein, the sensor positioning platform 200 can mechanically move, rotate, and/or reposition the payload 222 of sensors 104-108 to allow the sensors 104-108 to capture sensor data or measurements for different areas or regions of the outside environment, extend the addressable field of regard, extend and/or provide an instantaneous field of view, provide sensor visibility or access into a focused or specific area or object, account for different angles, account for different vehicle maneuvers, etc. The sensor data or measurements can be used to detect objects (e.g., other vehicles, obstacles, traffic signals, signs, etc.), humans, animals, conditions (e.g., weather conditions, visibility conditions, traffic conditions, road conditions, etc.), route or navigation conditions, and/or any other data or characteristics associated with the outside environment. The autonomous vehicle 102 can use the sensor data or measurements to perform (or when performing) one or more operations, such as mapping operations, tracking operations, navigation or steering operations, safety operations, braking operations, etc.).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Leach, in which sensors receive data and use that data together with map and pose data to determine a field-of-view position and adjust the field of view of a sensor, to include determining that the first location fails to meet at least one goal condition; after determining that the first location fails to meet the at least one goal condition, selecting a second location, the second location being the first distance from the autonomous vehicle and having a second azimuth position different than the first azimuth position; determining that the second location fails to meet the at least one goal condition; after determining that the second location fails to meet the at least one goal condition, determining that a goal location does meet the at least one goal condition, the goal location being different than the first location and the second location; and controlling the autonomous vehicle based at least in part on an output of the first sensor after the modifying of the field-of-view of the first sensor, as taught by Ko, in order to detect objects in the environment and avoid obstacles (Ko, paragraphs 36 and 37).
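By way of illustration only, the claimed select-and-test sequence, as the Examiner understands the proposed combination, may be sketched as follows (Python; the candidate ordering, the example goal condition, and all identifiers are hypothetical placeholders, not disclosures of Leach or Ko):

```python
from typing import Callable, Iterable, Optional, Tuple

Location = Tuple[float, float]  # (distance_m, azimuth_deg) from the vehicle

def find_goal_location(candidates: Iterable[Location],
                       meets_goal: Callable[[Location], bool]) -> Optional[Location]:
    """Walk candidate locations in order; return the first that satisfies
    every goal condition, mirroring the claimed select/test/select-again flow."""
    for loc in candidates:
        if meets_goal(loc):
            return loc
    return None

# First and second locations share the first distance but differ in azimuth;
# the goal location is a different location that does meet the condition.
first_distance = 30.0
candidates = [(first_distance, 0.0),    # first location: fails the condition
              (first_distance, 15.0),   # second location: fails the condition
              (first_distance, -15.0)]  # goal location: meets the condition

goal = find_goal_location(candidates, meets_goal=lambda loc: loc[1] < 0.0)
if goal is not None:
    # The field-of-view of the first sensor would then be directed toward
    # `goal`, and the vehicle controlled based on the sensor's output.
    print("goal location:", goal)
```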
As per claim 3, the combination of Leach and Ko teaches the limitations of:
selecting a third location, the third location being a second distance from the autonomous vehicle and having the first azimuth position; determining that the third location fails to meet the at least one goal condition; and after determining that the third location fails to meet the at least one goal condition, selecting a fourth location, the fourth location being the second distance from the autonomous vehicle and having the second azimuth position different than the first azimuth position, the fourth location being the goal location (Ko, see at least paragraphs 14 and 73).
As per claim 4, as best understood by the Examiner, the combination of Leach and Ko teaches the limitations of:
the second distance being less than the first distance (Leach, see at least paragraphs 20 – 21. The Examiner construes that, since Leach teaches checking locations at different distances, it would have been obvious to scan a third location at a second distance shorter than the first distance).
As per claim 5, the combination of Leach and Ko teaches the limitations of:
the first location aligning with a vehicle direction (See at least paragraph 36).
As per claim 6, the combination of Leach and Ko teaches the limitations of:
selecting a third location, the first location being the first distance from the autonomous vehicle and having a third azimuth position, the second azimuth position being offset from the vehicle direction in a first direction and the third azimuth position being offset from the vehicle direction in a second direction opposite the first direction (Ko, see at least paragraphs 14 and 73).
As per claim 7, the combination of Leach and Ko teaches the limitations of:
wherein the at least one goal condition comprises a condition that the goal location be within field-of-regard of the first sensor (Leach, see at least paragraphs 20, 31 – 32, and 36).
As per claim 10, the combination of Leach and Ko teaches the limitations of:
wherein the at least one goal condition comprises a condition that the goal location be on a route of the autonomous vehicle (Leach, see at least paragraph 49).
As per claim 11, the combination of Leach and Ko teaches the limitations of:
wherein the at least one goal condition comprise a condition that the goal location be either on a route of the autonomous vehicle or on a travel way that leads to the route of the autonomous vehicle (Leach, see at least paragraphs 49 and 81).
As per claim 12, the combination of Leach and Ko teaches the limitations of:
wherein the at least one goal condition comprises a condition that a difference between a direction of travel associated with the autonomous vehicle and a direction of travel at a portion of a travel way comprising the goal location be less than a threshold (Leach, see at least paragraph 20).
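By way of illustration only, the heading-difference condition of claim 12 may be sketched as follows (Python; the threshold value and all identifiers are hypothetical, not drawn from Leach):

```python
def heading_difference_deg(vehicle_heading_deg: float,
                           travel_way_heading_deg: float) -> float:
    """Smallest absolute angular difference between two headings (0-180 deg),
    accounting for wrap-around at 360 degrees."""
    diff = abs(vehicle_heading_deg - travel_way_heading_deg) % 360.0
    return min(diff, 360.0 - diff)

def goal_condition_met(vehicle_heading_deg: float,
                       travel_way_heading_deg: float,
                       threshold_deg: float = 30.0) -> bool:
    # Condition: the difference between the vehicle's direction of travel and
    # the travel way's direction at the goal location is below a threshold.
    return heading_difference_deg(vehicle_heading_deg,
                                  travel_way_heading_deg) < threshold_deg

assert goal_condition_met(350.0, 10.0)    # 20 deg apart across the wrap
assert not goal_condition_met(0.0, 90.0)  # perpendicular travel way
```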
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Leach and Ko in view of Fischer et al. (Hereinafter Fischer) (US 2018/0021954).
As per claim 8, Leach and Ko disclose all the elements of the claimed invention but do not teach the element of:
wherein the at least one goal condition comprises a condition that a line-of-sight from the position of the first sensor to the goal location is not occluded by any map objects.
Fischer teaches the element of:
wherein the at least one goal condition comprises a condition that a line-of-sight from the position of the first sensor to the goal location is not occluded by any map objects (See at least paragraph 123).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include wherein the at least one goal condition comprises a condition that a line-of-sight from the position of the first sensor to the goal location is not occluded by any map objects as taught by Fischer in the system of Leach and Ko, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
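By way of illustration only, such a line-of-sight condition may be sketched as follows (Python; modeling map objects as circles with a center and radius is a hypothetical simplification for illustration, not Fischer's disclosure):

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def segment_point_distance(a: Point, b: Point, p: Point) -> float:
    """Shortest distance from point p to the line segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def line_of_sight_clear(sensor: Point, goal: Point,
                        map_objects: List[Tuple[Point, float]]) -> bool:
    """True if no map object (modeled as a circle: center, radius) occludes
    the straight segment from the sensor position to the goal location."""
    return all(segment_point_distance(sensor, goal, center) > radius
               for center, radius in map_objects)

# A map object at (5, 0) with a 2 m radius blocks a goal directly behind it,
# while an object offset to the side leaves the line of sight clear.
assert not line_of_sight_clear((0.0, 0.0), (10.0, 0.0), [((5.0, 0.0), 2.0)])
assert line_of_sight_clear((0.0, 0.0), (10.0, 0.0), [((5.0, 6.0), 2.0)])
```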
As per claim 9, Leach, Ko and Fischer disclose wherein the at least one goal condition comprises a condition that the goal location is on a travel way at a common travel way level with a current travel way of the autonomous vehicle based at least in part on a position of the autonomous vehicle (Fischer, see at least abstract and paragraph 25).
Regarding claims 13 – 21:
Claims 13 – 21 are rejected using the same rationale, mutatis mutandis, as applied to corresponding claims 2 – 12 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ebrahimi Afrouzi et al. (US 2024/0310851 A1) discloses an obstacle recognition method for autonomous robots.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IG T AN, whose telephone number is (571) 270-5110. The examiner can normally be reached M – F, 10:00 AM – 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad, can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
IG T AN
Primary Examiner
Art Unit 3662
/IG T AN/Primary Examiner, Art Unit 3662