Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 21, 22, 24-29, 31-36, and 38-40 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent No. 10,421,453 B2 to Ferguson et al., filed in 2014 (hereinafter "Ferguson"), in view of Japanese Patent Publication No. JP 2014-203168 A to Tagawa, filed in 2013 (hereinafter "Tagawa"), and further in view of U.S. Patent Application Publication No. US 2019/0025841 A1 to HAYNES et al., filed in 2017 (hereinafter "HAYNES").
In regard to claims 21, 28, and 35, Ferguson discloses "...21. (New) A system comprising:
one or more processors; and
one or more non-transitory computer-readable media storing instructions executable by the one or more processors, wherein the instructions, when executed, cause the system to perform (see FIG. 11, computing device and processor 120) operations comprising:
receiving map data associated with an environment; (see FIG. 1, and the perception system 172 and device 401 in FIG. 4, which can reflect the pulses and prepare a 3D point cloud of the environment surrounding the AV)
[media_image1.png: greyscale figure reproduced from the cited reference]
receiving sensor data from a sensor associated with a vehicle in the environment; (see vehicle 100 scanning vehicle 714 in the path of the vehicle 100)
determining, based at least in part on the sensor data, object data associated with an object in the environment; (see FIG. 7C, where the object is vehicle 714 and its candidate trajectories 720-1, 720-2, 730, and 740 are shown, each trajectory having a probability (10 to 20 percent) of being executed, with some going straight, some turning left, and some turning right; a bounding box is shown around the vehicle, and the crosswalks 530 and 532 in front of the vehicle, plus an unnumbered crosswalk in front of 714, are regions an object may move into)
[media_image2.png: greyscale figure reproduced from the cited reference]
Tagawa teaches "...the object data comprising at least one of a semantic label of the object, a class associated with the object, a bounding box representing the object, a velocity of the object, or an acceleration of the object; (see FIG. 2, where a bounding box is placed over each vehicle as m1, m2, and m3 to show the vehicle's trajectory in a top-down view; see also step S8, where the speed and acceleration of the vehicle are determined along with a position distribution of the two vehicles)".
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of Tagawa with the disclosure of Ferguson, with a reasonable expectation of success, since Tagawa teaches that a bounding box can be provided over the pedestrian (FIG. 6, element s1) and that the position, speed, and acceleration can be shown for both the vehicle and the pedestrian. Also, in FIG. 2, bounding boxes can be provided as M1, M2, and M3 to show the trajectory, speed, acceleration, and path of the own vehicle and other vehicles. This can provide trajectory avoidance and collision avoidance for increased safety. The method does not use the absolute location but rather a prediction within the box, and this prediction of movement within the box can provide improved accuracy. See paragraphs 1-12 of Tagawa.
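Purely for illustration of the object representation attributed to Tagawa above (a top-down bounding box carrying position, speed, and acceleration, with motion predicted relative to the box), the following Python sketch is offered; the class and field names are hypothetical and do not appear in the reference:

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        # Top-down bounding box (cf. boxes M1, M2, M3 in Tagawa FIG. 2)
        x: float          # box center position, meters
        y: float
        length: float     # box extent, meters
        width: float
        speed: float      # own/other vehicle state (cf. step S8)
        accel: float

        def predict_position(self, dt):
            """Constant-acceleration prediction of the box center after dt
            seconds, assuming motion along the box's current heading
            (here taken as the +x axis for simplicity)."""
            dx = self.speed * dt + 0.5 * self.accel * dt * dt
            return (self.x + dx, self.y)

    # Example: a tracked vehicle predicted 1.0 s ahead (hypothetical values).
    print(TrackedObject(0.0, 0.0, 4.5, 1.8, speed=10.0, accel=0.5).predict_position(1.0))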
The primary reference is silent, but HAYNES teaches "...inputting the object data and the map data into a machine learned model; (see the abstract, where an autonomous vehicle can include a prediction system that, for each object perceived by the autonomous vehicle, generates one or more potential goals, selects one or more of the potential goals, and develops one or more trajectories by which the object can achieve the one or more selected goals; the prediction systems and methods described therein can include or leverage one or more machine-learned models that assist in predicting the future locations of the objects)
[media_image3.png: greyscale figure reproduced from the cited reference]
receiving, from the machine learned model and based at least in part on the object data and the map data, a prediction probability associated with movement of the object in the environment; (see FIGS. 1-2, where the processors can include a motion planning system and a perception system that provide a prediction and then different future trajectories, supporting scenario development 206 and motion planning 105) and
controlling, based at least in part on the prediction probability, the vehicle to traverse the environment". (see paragraphs 150-153, where an area in which an object has fallen and remains static is to be avoided at all costs via an instruction)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of HAYNES with the disclosure of Ferguson, with a reasonable expectation of success, since HAYNES teaches that a machine learning computing system can include one or more models that provide a predicted future location of objects in the road for collision avoidance. This can use the rules of the road and the type of object (a static object or a moving car) for collision avoidance purposes in motion planning, avoiding the vehicles in the lanes by using an estimation of the capability of the vehicle or truck to determine its future position and path. See paragraphs 50 and 110-128.
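For illustration of the combined predict-then-plan flow only, the following Python sketch mirrors the mapped limitations (object data and map data in, a prediction probability out, and a control decision based on that probability); the predictor interface and all identifiers are hypothetical and are not taken from HAYNES:

    import numpy as np

    def control_step(predictor, object_data, map_data):
        """One perception -> prediction -> planning cycle as mapped above:
        input the object data and the map data into a (here hypothetical)
        machine-learned predictor, receive a prediction probability for each
        candidate movement of the object, and select the movement the
        planner should account for when controlling the vehicle."""
        features = np.concatenate([object_data, map_data])
        movement_probs = predictor(features)       # e.g., [straight, left, right]
        return int(np.argmax(movement_probs))      # fed to motion planning / control

    # Toy stand-in for a trained model (illustrative only):
    toy_predictor = lambda f: np.array([0.7, 0.2, 0.1])
    print(control_step(toy_predictor, np.zeros(4), np.zeros(8)))  # -> 0 (straight)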
In regard to claims 22, 29, and 36, Ferguson discloses "...22. (New) The system of claim 21, wherein the map data includes semantic information associated with the environment, the semantic information comprising at least one of road network information or a traffic light status". (see FIG. 7C, where the object is vehicle 714 and its candidate trajectories 720-1, 720-2, 730, and 740 are shown, each with a probability (10 to 20 percent) of being executed, some going straight, some turning left, and some turning right, with a bounding box shown and the crosswalks 530 and 532, plus an unnumbered crosswalk in front of 714, being regions an object may move into; a first channel is shown as the road having a shape and an intersection as in FIG. 7C, a second channel shows the four different trajectories, a third channel shows the probability of each trajectory, and a fifth channel shows the crosswalks and a stop sign with shading)
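Purely as an illustrative aside on the multi-channel, top-down encoding described in the citation above, the following Python sketch stacks per-feature rasters into one input tensor; the layer contents, resolution, and values are hypothetical and are not taken from Ferguson:

    import numpy as np

    H = W = 64                                  # top-down grid resolution (illustrative)
    road      = np.zeros((H, W), np.float32)    # channel: road shape / intersection
    trajs     = np.zeros((H, W), np.float32)    # channel: rasterized candidate trajectories
    traj_prob = np.zeros((H, W), np.float32)    # channel: per-trajectory probability
    crossings = np.zeros((H, W), np.float32)    # channel: crosswalks / stop sign shading

    road[28:36, :] = 1.0                        # a horizontal road segment
    traj_prob[30:34, 32:] = 0.15                # e.g., a 10-20 percent trajectory
    top_down = np.stack([road, trajs, traj_prob, crossings])
    print(top_down.shape)                       # (4, 64, 64): channels x height x width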
Claims 23, 30, and 37 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent No. 10,421,453 B2 to Ferguson et al., filed in 2014, in view of Japanese Patent Publication No. JP 2014-203168 A to Tagawa, filed in 2013, in view of U.S. Patent Application Publication No. US 2019/0025841 A1 to HAYNES et al., filed in 2017 (hereinafter "HAYNES"), and further in view of U.S. Patent Application Publication No. US 2017/0031361 A1 to Olson, filed in 2015 (hereinafter "Olson").
In regard to claims 23, 30, and 37, Ferguson is silent, but Olson teaches "...23. (New) The system of claim 21, wherein the prediction probability comprises at least one of:
a multi modal Gaussian trajectory; or
an occupancy grid associated with a future time, wherein a cell of the occupancy grid is indicative of a probability of the object being in a region associated with the cell at the future time". (see paragraphs 60-70, in particular paragraph 64)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the disclosure to combine the disclosure of Ferguson with the teachings of Olson, since Olson teaches that a machine learning neural network can study traffic data. The model can identify traffic data and behavior data, assign a number of trajectories along which a vehicle may move, and associate a probability with each possible trajectory. A simulation can then be run, and a host vehicle policy can be adopted to issue better and safer control commands based on the planned and likely trajectories of the host and non-host vehicles. A change, and the point of change, can also be detected. The Bayesian information criterion is a known approximation that avoids marginalizing over the policy parameters and provides a principled penalty against complex policies by assuming a Gaussian posterior around the estimated parameters; a maximum likelihood estimation (MLE) can thus be used. This provides for identification of trajectory results that are likely, or of an anomaly. See paragraphs 40 and 60-70 of Olson.
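As an illustrative aside on the Bayesian information criterion referenced above: BIC scores a candidate policy as k·ln(n) − 2·ln(L), trading data likelihood against the number of policy parameters so that more complex policies are penalized. A minimal Python sketch with hypothetical numbers (not taken from Olson):

    import math

    def bic(k_params, n_obs, log_likelihood):
        """Bayesian information criterion: penalizes complex policies while
        approximating marginalization over policy parameters via a Gaussian
        (Laplace) posterior around the maximum likelihood estimate."""
        return k_params * math.log(n_obs) - 2.0 * log_likelihood

    # Choose between a simple and a complex behavior policy (toy numbers):
    simple_policy  = bic(k_params=3,  n_obs=500, log_likelihood=-420.0)
    complex_policy = bic(k_params=12, n_obs=500, log_likelihood=-405.0)
    print(min((simple_policy, "simple"), (complex_policy, "complex")))  # lower BIC wins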
In regard to claims 24, 31, and 38, HAYNES teaches "...24. (New) The system of claim 21, wherein the machine learned model comprises an encoder and a decoder". (see paragraphs 168 and 171, where the AI includes a learning model and a machine learning algorithm that can encode, decode, classify, and score the model, and provides a goal and scoring of the goal in paragraphs 178-181)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of HAYNES with the disclosure of Ferguson, with a reasonable expectation of success, since HAYNES teaches that a machine learning computing system can include one or more models that provide a predicted future location of objects in the road for collision avoidance. This can use the rules of the road and the type of object (a static object or a moving car) for collision avoidance purposes in motion planning, avoiding the vehicles in the lanes by using an estimation of the capability of the vehicle or truck to determine its future position and path. See paragraphs 50 and 110-128.
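For illustration of the claimed encoder/decoder structure only (this is not HAYNES' actual architecture), the following minimal Python sketch encodes scene features to a latent code and decodes that code to prediction probabilities; all weights and dimensions are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    W_enc = rng.normal(size=(16, 8))    # encoder: 16-dim object+map features -> 8-dim latent
    W_dec = rng.normal(size=(8, 3))     # decoder: latent -> 3 movement-class scores

    def predict(scene_features):
        """Encoder-decoder sketch: encode features to a latent code, decode
        the code to a prediction probability per candidate movement."""
        latent = np.tanh(scene_features @ W_enc)
        scores = latent @ W_dec
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()          # softmax -> prediction probabilities

    print(predict(rng.normal(size=16)))  # e.g., [p_straight, p_left, p_right]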
In regard to claims 25, 32, and 39, HAYNES teaches "...25. (New) The system of claim 24, wherein the decoder comprises one or more of:
a recurrent neural network;
a network configured to regress a plurality of prediction probabilities substantially simultaneously; or
a network comprising a two dimensional convolutional-transpose network". (see paragraphs 39-41, 49-56, and 110-119)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of HAYNES with the disclosure of Ferguson, with a reasonable expectation of success, since HAYNES teaches that a machine learning computing system can include one or more models that provide a predicted future location of objects in the road for collision avoidance. This can use the rules of the road and the type of object (a static object or a moving car) for collision avoidance purposes in motion planning, avoiding the vehicles in the lanes by using an estimation of the capability of the vehicle or truck to determine its future position and path. See paragraphs 50 and 110-128.
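Of the three claimed decoder options, the convolutional-transpose variant can be illustrated compactly: a transposed convolution upsamples a small latent feature map toward a larger grid (e.g., an occupancy-grid-sized output). The following minimal Python sketch is hypothetical and is not taken from HAYNES:

    import numpy as np

    def conv_transpose2d(x, kernel, stride=2):
        """Minimal single-channel 2-D transposed convolution: each input cell
        stamps a scaled copy of the kernel into the upsampled output grid."""
        h, w = x.shape
        kh, kw = kernel.shape
        out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
        for i in range(h):
            for j in range(w):
                out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * kernel
        return out

    latent_grid = np.arange(4.0).reshape(2, 2)       # tiny latent feature map
    up = conv_transpose2d(latent_grid, np.ones((2, 2)))
    print(up.shape)                                   # (4, 4): latent upsampled toward a grid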
In regard to claims 26, 33, and 40, HAYNES teaches "...26. (New) The system of claim 21, the operations further comprising determining, based on the prediction probability and a vehicle dynamics model associated with the object, a predicted trajectory associated with the object". (see paragraphs 123-137)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of HAYNES with the disclosure of Ferguson with a reasonable expectation of success since HAYNES teaches that a machine learning computing system can include one or more models that can provide a prediction to provide a predicted future location of objects in the road for collision avoidance. This can use rules of the road and the type of object a static object or a moving car for collision avoidance purposes for motion planning to avoid the vehicles in the lanes using an estimation of the capability of the vehicle or truck to determine a future position and path in the future. See paragraph 50, 110-128.
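For illustration only, a predicted trajectory can be obtained by rolling a simple dynamics model forward for the movement selected from the prediction probability; the constant-acceleration model and all values below are hypothetical and are not HAYNES' disclosed dynamics model:

    import numpy as np

    def predicted_trajectory(x0, v0, accel, dt=0.1, steps=20):
        """Roll a constant-acceleration dynamics model forward to turn a
        predicted movement (e.g., the argmax of the prediction probability)
        into a time-stamped sequence of along-path positions for the object."""
        t = np.arange(1, steps + 1) * dt
        return x0 + v0 * t + 0.5 * accel * t**2

    # Object predicted (with highest probability) to keep accelerating gently:
    print(predicted_trajectory(x0=0.0, v0=5.0, accel=0.5)[:3])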
In regard to claim 27, HAYNES teaches "...27. (New) The system of claim 26, wherein the vehicle dynamics model includes at least a velocity cost (see paragraph 38), a position cost, an acceleration cost (see paragraph 50), and rules of the road (see paragraph 31)".
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of HAYNES with the disclosure of Ferguson, with a reasonable expectation of success, since HAYNES teaches that a machine learning computing system can include one or more models that provide a predicted future location of objects in the road for collision avoidance. This can use the rules of the road and the type of object (a static object or a moving car) for collision avoidance purposes in motion planning, avoiding the vehicles in the lanes by using an estimation of the capability of the vehicle or truck to determine its future position and path. See paragraphs 50 and 110-128.
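The claim 27 cost terms can be illustrated as a weighted sum over a candidate trajectory; the weights and the rules-of-the-road penalty below are hypothetical placeholders, not values from HAYNES:

    def trajectory_cost(positions, velocities, accels, rule_violations,
                        w_pos=1.0, w_vel=0.5, w_acc=0.25, w_rules=10.0):
        """Illustrative weighted-sum cost mirroring the claim 27 terms:
        a position cost, a velocity cost, an acceleration cost, and a
        rules-of-the-road penalty for each violating step."""
        cost = 0.0
        for p, v, a, violates in zip(positions, velocities, accels, rule_violations):
            cost += w_pos * p**2 + w_vel * v**2 + w_acc * a**2
            if violates:                 # e.g., leaving the lane / running a stop sign
                cost += w_rules
        return cost

    print(trajectory_cost([0.1, 0.2], [5.0, 5.1], [0.5, 0.4], [False, True]))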
In regard to claim 34, the primary reference is silent, but Tagawa teaches "...34. (New) The one or more non-transitory computer-readable media of claim 28, wherein the object data comprises the bounding box representing the object, and wherein the bounding box representing the object is a three-dimensional bounding box". (see FIG. 2, where a bounding box is placed over each vehicle as m1, m2, and m3 to show the vehicle's trajectory in a top-down view; see also step S8, where the speed and acceleration of the vehicle are determined along with a position distribution of the two vehicles, and where the state, speed, and acceleration of each bounding box can be provided via sensor 2: in step S8 the own vehicle state recognition unit 13 identifies the speed and acceleration of the own vehicle P and recognizes the own vehicle state; the own vehicle predicted movement distribution generation unit 14 then generates a position distribution of the own vehicle P based on the speed of the own vehicle P and the like (step S9); next, the collision determination unit 15 calculates the collision probability based on the position distribution of the movable object R1 and the position distribution of the host vehicle P (step S10) and ends the process; see also FIG. 6, where a top-down view of the pedestrian crossing is shown with the speed and trajectory as R1)
[media_image4.png: greyscale figure reproduced from the cited reference]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to combine the teachings of Tagawa with the disclosure of Ferguson, with a reasonable expectation of success, since Tagawa teaches that a bounding box can be provided over the pedestrian (FIG. 6, element s1) and that the position, speed, and acceleration can be shown for both the vehicle and the pedestrian. Also, in FIG. 2, bounding boxes can be provided as M1, M2, and M3 to show the trajectory, speed, acceleration, and path of the own vehicle and other vehicles. This can provide trajectory avoidance and collision avoidance for increased safety. The method does not use the absolute location but rather a prediction within the box, and this prediction of movement within the box can provide improved accuracy. See paragraphs 1-12 of Tagawa.
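As a purely illustrative rendering of the collision-probability step characterized above (Tagawa step S10: combining the position distribution of the movable object with that of the own vehicle), the following Monte Carlo sketch in Python estimates the overlap of two Gaussian position distributions; all parameters are hypothetical:

    import numpy as np

    def collision_probability(mu_obj, cov_obj, mu_ego, cov_ego,
                              hit_radius=1.5, n=100_000, seed=0):
        """Sample the movable object's and the own vehicle's position
        distributions and estimate the probability that they come within
        a collision radius of each other."""
        rng = np.random.default_rng(seed)
        obj = rng.multivariate_normal(mu_obj, cov_obj, n)
        ego = rng.multivariate_normal(mu_ego, cov_ego, n)
        return float((np.linalg.norm(obj - ego, axis=1) < hit_radius).mean())

    p = collision_probability([10, 0], np.eye(2), [8, 0], 0.5 * np.eye(2))
    print(f"estimated collision probability: {p:.3f}")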
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 21-40 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-8 of U.S. Patent No. 12,183,204, which recites "a system that includes a top down perspective and a multi channel machine learning model and controlling the vehicle based on the multi channeled model in the neural network".
The only difference is that the present claims lack the multi-channel model; however, organizing the inputs in this manner would have been obvious in view of HAYNES. The claims are otherwise identical.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEAN PAUL CASS whose telephone number is (571)270-1934. The examiner can normally be reached Monday to Friday 7 am to 7 pm; Saturday 10 am to 12 noon.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott A. Browne can be reached at 571-270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEAN PAUL CASS/Primary Examiner, Art Unit 3666