DETAILED CORRESPONDENCE
This action is in response to the filing of the Continuation on 07/22/2024. Claims 1 – 20 were canceled in the preliminary amendment. Claims 21 – 40 are new.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Independent Claims 21, 29 and 37 of the instant application are rejected on the grounds of non-statutory double patenting (anticipatory type), as being broader than Independent Claims 1, 8 and 15 of U.S. Patent No. 12,071,156 B2.
The claims of the instant application covered by the non-statutory double patenting rejection are mapped to the claims or claim limitations of U.S. Patent No. 12,071,156 B2.
See Chart below:
Instant Application Claims   |   Patent No. US 12,071,156 B2
Claims 21, 29 and 37         |   Claims 1, 8 and 15
Claims 23, 31 and 39         |   Claims 2, 9 and 16
Claims 25 and 33             |   Claims 3, 10 and 18
Claims 26 and 34             |   Claims 4, 11 and 19
Claims 27 and 35             |   Claims 5, 12 and 17
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 21, 29 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005407) in view of Silva (US 20200225669).
Claim 21, Browning discloses a computer-implemented method comprising:
receiving a previously-generated surfel map representing a driving environment in which a vehicle is located, the surfel map comprising a plurality of surfels generated by a plurality of other vehicles obtaining sensor data in the environment, each surfel corresponding to a respective different location in the environment in which a vehicle is located; [see at least p0021 – p0022, p0038 and p0080 and also see Fleet vehicles in Fig 12 - storage system may be provided with a vehicle to store a collection of submaps that represent a geographic area where the vehicle may be driven. A programmatic interface may be provided to receive submaps and submap updates independently of other submaps; Still further, in some examples, a submap provides a data structure that can carry one or more data layers which fulfill a data consumption requirement of a vehicle when the vehicle is autonomously navigated through an area of a road segment. The data layers of the submap can include, or may be based on, sensor information collected from a same or different vehicle (or other source) which passed through the same area on one or more prior instances; Each of the stored submaps 105 may include data layers corresponding to multiple types of information about a corresponding road segment; According to some examples, the localization layer 306 may provide a three-dimensional point cloud of imagelets and/or surfels. Depending on the implementation, the imagelets or surfels may represent imagery captured through Lidar, stereoscopic cameras, a combination of two-dimensional cameras and depth sensors, or other three-dimensional sensing technology];
receiving, in real-time from one or more sensors, sensor data of the environment in which the vehicle is located; [see at least p0039 - to identify the likely submap for an initial location of the vehicle 10. In one implementation, the submap processing component 120 implements the start component 122 as a coarse or first-pass process to compare the submap feature set 113 of an initially retrieved submap against a current sensor state 493, as determined from one or more sensor interfaces or components of the vehicle's sensor system 492. The start logic 114 may perform the comparison to identify, for example, a current submap 145 of the initial set which contains the feature of a landmark detected as being present in the current sensor state 493 of the vehicle 10];
determining, based on comparing the representation of the environment in the surfel map to the sensor data received from the one or more sensors, that the environment in which the vehicle is located includes a change; [see at least p0045 and p0153 - the submap processing component 120 can include perception component 124 which provides perception output 129 representing objects that are detected (through analysis of the current sensor state 493) as being present in the area of the road network. The perception component 124 can determine the perception output to include, for example, a set of objects (e.g., dynamic objects, road features, etc.). In determining the perception output 129, the perception component 124 can compare detected objects from the current sensor state 493 with known and static objects identified with the submap feature set 113. The perception component 124 can generate the perception output 129 to identify (i) static objects which may be in the field of view, (ii) non-static objects which may be identified or tracked, (iii) an image representation of the area surrounding a vehicle with static objects removed or minimized, so that the remaining data of the current sensor state 493 is centric to dynamic objects; Similarly, the perceived geometric differential may reflect a difference in a geometric attribute (e.g., height, width, footprint or shape) of an object or feature that is depicted in the current image data 1043, as compared to the depiction of the object or feature with the corresponding imagelets of the point cloud 1035].
Browning does not specifically disclose, in response, generating an updated path for the vehicle to travel based on the change to the environment.
However, Silva discloses evaluating trajectories based on predicted occluded regions associated with the trajectories, where the predicted occluded regions are determined with respect to predicted locations of objects in an environment. Further teaching, the operation 122 can include determining an occlusion score for the predicted occluded region 126. For instance, an occlusion score can be based on one or more aspects or attributes of the predicted occluded region 126, where the occlusion score may be associated with a size of the predicted occluded region, an importance of an area occluded by the object along the particular trajectory being evaluated, map data, and the like; occlusion scores may be determined for a plurality of future times along the trajectory 114, and occlusion scores may be summed or aggregated to determine a trajectory score for the trajectory 114. Additionally, as time passes, the occlusion scores (and thus the trajectory scores) may be revised or otherwise updated as objects maintain or change trajectories and the environment surrounding the vehicle 112 changes. At operation 128, the process can include controlling the vehicle based at least in part on the predicted occluded region. For example, the trajectory scores for multiple possible trajectories can be compared to select a trajectory, which can be used to control the vehicle 112 to traverse the environment. The selected trajectory can be based on a highest trajectory score of the multiple trajectory scores, a lowest trajectory score of the multiple trajectory scores, a median trajectory score of the multiple trajectory scores, or any suitable selection. In at least some examples, a trajectory may be chosen to optimize the amount of data in an environment so as to ensure safe traversal by the vehicle.
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Browning to include, in response, generating an updated path for the vehicle to travel based on the change to the environment, as suggested and taught by Silva, with a reasonable expectation of success, for the purpose of providing ease and efficiency when selecting potential routes by automatically deciding on the best route based on environmental conditions.
Claim 29 is similarly rejected as Claim 21, see above.
Claim 37 is similarly rejected as Claim 21, see above.
Claims 22, 23, 24, 25, 30, 31, 32, 33, 38, 39 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005407) in view of Silva (US 20200225669), and further in view of ISHII (US 20180178800).
Claim 22, Browning discloses the method of claim 21, but is silent to wherein the change introduces an occlusion of a line of sight between a location of the vehicle and an area of interest, and wherein generating the updated path comprises generating a path that establishes a line of sight between one or more sensors of the vehicle and the area of interest.
However, ISHII discloses an information processing apparatus capable of assisting safe driving of vehicles at places, such as intersections, where the view from the vehicle is obstructed or restricted [see p0007].
Further providing, the blind spot determining unit 103 determines that the subsequent image is a blind spot image in which the object may be present at a blind spot of the traffic mirror. In other words, the blind spot determining unit 103 determines whether a moving object that has been seen in the traffic mirror is no longer seen therein in the consecutive time-series images and determines that the image in which the moving object is no longer seen is a blind spot image. The output processing unit 102 generates driving assist information for a vehicle on the basis of the size of the object calculated by the calculating unit 101 and outputs the generated driving assist information. The driving assist information is generated in accordance with change information regarding a change in size of the object in at least two consecutive time-series images and is output. If the change information indicates an increase in the size, the driving assist information includes at least one of information for causing the vehicle to decelerate and information for causing the vehicle to move in a direction to be away from the object. In addition, if the change information indicates an increase in the size, the driving assist information includes information for causing the vehicle to start traveling after the object moves away from the vehicle [see Figs 9b, 10 and 13, p0140 – p0149].
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Browning to include wherein the change introduces an occlusion of a line of sight between a location of the vehicle and an area of interest, and wherein generating the updated path comprises generating a path that establishes a line of sight between one or more sensors of the vehicle and the area of interest, as suggested and taught by ISHII, with a reasonable expectation of success, for the purpose of providing predictive behavior to avoid hazards when the view from the vehicle being driven is obstructed or restricted.
Claim 30 is similarly rejected as Claim 22, see above.
Claim 38 is similarly rejected as Claim 22, see above.
Claim 23, Browning discloses the method of claim 22, wherein the area of interest corresponds to a location from which cross traffic, pedestrians, or cyclists are predicted to originate [see at least p0069 - described with some other examples, the sensor analysis determinations 265 can include object detection regarding the formation of instantaneous road conditions (e.g., new road hazard), as well as pattern detection regarding traffic behavior (e.g., lane formation, turn restrictions in traffic intersections, etc.)].
Claim 31 is similarly rejected as Claim 23, see above.
Claim 39 is similarly rejected as Claim 23, see above.
Claim 24, Browning discloses the method of claim 22, but is silent to wherein determining that the environment in which the vehicle is located includes a change comprises determining that the environment includes a parked vehicle that occludes the line of sight.
However, Silva discloses evaluating trajectories based on predicted occluded regions associated with the trajectories, where the predicted occluded regions are determined with respect to predicted locations of objects in an environment. Further teaching, an occlusion system associated with the vehicle can determine predicted occluded regions associated with objects based on the trajectories for the vehicle to follow in the environment and/or predicted trajectories associated with the detected objects.
For static objects (e.g., parked cars, buildings, etc.) the occlusion system can determine predicted occluded regions for various points along a vehicle trajectory [see p0013 – p0014].
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Browning to include wherein determining that the environment in which the vehicle is located includes a change comprises determining that the environment includes a parked vehicle that occludes the line of sight, as suggested and taught by Silva, with a reasonable expectation of success, for the purpose of providing predictive behavior to avoid hazards when the view from the vehicle being driven is obstructed or restricted.
Claim 32 is similarly rejected as Claim 24, see above.
Claim 40 is similarly rejected as Claim 24, see above.
Claim 25, Browning discloses the method of claim 21, but is silent to wherein generating the updated path for the vehicle to travel comprises causing the vehicle to move forward a predetermined amount.
However, ISHII discloses that the driving assist information is generated in accordance with change information regarding a change in size of the object in at least two consecutive time-series images and is output. If the change information indicates an increase in the size, the driving assist information includes at least one of information for causing the vehicle to decelerate and information for causing the vehicle to move in a direction to be away from the object. In addition, if the change information indicates an increase in the size, the driving assist information includes information for causing the vehicle to start traveling after the object moves away from the vehicle [see Figs 9b, 10 and 13, p0140 – p0149].
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Browning to include wherein generating the updated path for the vehicle to travel comprises causing the vehicle to move forward a predetermined amount, as suggested and taught by ISHII, with a reasonable expectation of success, for the purpose of providing predictive behavior to avoid hazards when the view from the vehicle being driven is obstructed or restricted.
Claim 33 is similarly rejected as Claim 25, see above.
Claims 26 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005407) in view of Silva (US 20200225669), and further in view of KIM (US 20190056749).
Claim 26, Browning discloses the method of claim 21, but is silent to wherein generating the updated path for the vehicle to travel comprises causing the vehicle to advance a portion of the vehicle beyond an object in the environment.
However, Kim discloses a method of generating a route for a vehicle to depart from a parked state based on measured object information. The present disclosure may further provide a method of securing the safety of a user by controlling a door of a vehicle based on object information [see p0009].
Further, the controller 850 may determine whether it is possible to generate a route to an empty space between the objects OB1423 and OB1424, when the objects OB1423 and OB1424 exist in the route through which the vehicle performs parking out by moving forward. The controller 850 may determine whether the objects OB1423 and OB1424 exist in the space occupied by the vehicle 100 when moving forward, based on the information related to the objects OB1423 and OB1424.
For example, the controller 850 may determine whether the objects OB1423 and OB1424 exist in the space occupied by the vehicle 100 when moving forward, based on the cross section having the largest area among the cross sections perpendicular to the traveling direction of the vehicle 100.
When it is determined that the objects OB1423 and OB1424 do not exist in the space occupied by the vehicle 100 when moving forward and a route to an empty space between the objects OB1423 and OB1424 can be generated, the controller 850 may generate a route A1440 through which the vehicle 100 steers to perform parking out. When there is a moving object OB1423, the controller 850 may determine whether it is possible to generate a route to an empty space between the objects OB1423 and OB1424, based on distance and speed information with respect to the moving object OB1423.
The controller 850 may determine whether the moving object OB1423 enters into the space occupied when moving forward and the entering time, based on the distance and the speed information with respect to the moving object OB1423. The controller 850 may determine whether it is possible to generate a route to an empty space between the objects OB1423 and OB1424, based on information on whether the moving object OB1423 enters into the space occupied when moving forward and the entering time [see p0464 – p0469 and Fig 14B].
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Browning as modified to include wherein generating the updated path for the vehicle to travel comprises causing the vehicle to advance a portion of the vehicle beyond an object in the environment, as suggested and taught by Kim, with a reasonable expectation of success, for the purpose of providing a system to determine whether the object is located in a space through which the vehicle passes based on the vehicle traveling in a forward direction.
Claim 34 is similarly rejected as Claim 26, see above.
Claims 27 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005407) in view of Silva (US 20200225669), and further in view of Karasev (US 20200272148).
Claim 27, Browning discloses the method of claim 21, but is silent to wherein generating the updated path for the vehicle to travel comprises causing the vehicle to move forward an amount corresponding to a change in dimensions of an object based on the sensor data.
However, Karasev discloses a non-limiting list of objects may include obstacles in an environment, including but not limited to pedestrians, animals, cyclists, trucks, motorcycles, other vehicles, or the like. Such objects in the environment have a “geometric pose” (which may also be referred to herein as merely “pose”) comprising a location and/or orientation of the overall object relative to a frame of reference. In some examples, pose may be indicative of a position of an object (e.g., pedestrian), an orientation of the object, or relative appendage positions of the object. Geometric pose may be described in two-dimensions (e.g., using an x-y coordinate system) or three-dimensions (e.g., using an x-y-z or polar coordinate system), and may include an orientation (e.g., roll, pitch, and/or yaw) of the object [see p0009].
Further teaching, a prediction system of a computing device of the autonomous vehicle can include a machine learning model trained to output data that can be used to generate one or more predicted trajectories of objects proximate to the autonomous vehicle. For example, the machine learning model can output coordinates (e.g., x-coordinates and y-coordinates) associated with the object (e.g., a third-party vehicle) at one or more times in the future (e.g., 1 second, 2 seconds, 3 seconds, etc.). In some examples, the machine learning model can output coordinates associated with the object as well as probability information associated with each coordinate [see at least p0016].
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Browning as modified to include wherein generating the updated path for the vehicle to travel comprises causing the vehicle to move forward an amount corresponding to a change in dimensions of an object based on the sensor data, as suggested and taught by Karasev, with a reasonable expectation of success, for the purpose of providing a perception system for the autonomous vehicle to recognize objects in the environment so that the autonomous vehicle can plan a safe route through the environment.
Claim 35 is similarly rejected as Claim 27, see above.
Claims 28 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Browning (US 20180005407) in view of Silva (US 20200225669), and further in view of Jafari Tafti (US 20180348767).
Claim 28, Browning discloses the method of claim 21, but is silent to wherein generating the updated path for the vehicle to travel comprises generating an updated driving plan that causes the vehicle to change lanes.
However, Jafari Tafti discloses a processor-implemented method for automated driving of a vehicle that includes the steps of receiving, by one or more data processors, vehicle state data, map data, and vehicle object environment data, and generating, by the one or more data processors, a first trajectory path that is optimal with respect to the vehicle state data, the map data, and the vehicle object environment data [see Summary].
Further disclosing, FIG. 4 illustrates components of a trajectory refining system 112 which may be embedded within the controller 34. Inputs to the trajectory refining system 112 may be received from the trajectory planning system 100, the sensor system 28, other control modules (not shown) associated with the autonomous vehicle 10, the communication system 36, and/or determined/modeled by other sub-modules (not shown) within the controller 34. Threat assessment module 118 reviews and checks the updated trajectory for possible collisions with obstacles and/or vehicles. Checking the updated trajectory includes reviewing sensor data received from the sensor fusion module 102 for possible collisions with detected obstacles and/or vehicles. For example, the module 122 can generate a lane change trajectory, if needed. The updated trajectories generated by modules 120, 122 are also reviewed and checked by the threat assessment module 118 for possible collisions with obstacles and/or vehicles [see Fig 4, p0066 – p0068, p0082].
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the device in Browning as modified to include wherein generating the updated path for the vehicle to travel comprises generating an updated driving plan that causes the vehicle to change lanes, as suggested and taught by Jafari Tafti, with a reasonable expectation of success, for the purpose of improving the computational efficiency of trajectory planning to generate a safe and feasible trajectory that satisfies the known constraints.
Claim 36 is similarly rejected as Claim 28, see above.
Conclusion
The examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. Applicant should consider the entire prior art as applicable to the limitations of the claims. It is respectfully requested that the applicant, in preparing the response, fully consider the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RENEE LAROSE whose telephone number is (313)446-4856. The examiner can normally be reached on Monday - Friday 8:30am - 5:00pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Renee LaRose/ Examiner, Art Unit 3657
/ABBY LIN/ Supervisory Patent Examiner, Art Unit 3657