DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
All amendments to the claims as filed on 1/22/26 have been entered, and this action follows:
Response to Arguments
Rejection under 35 USC 101
Per the applicant’s amendments and arguments, the rejections under 35 USC 101 are withdrawn.
Rejection under 35 USC 112(b)
Per the applicant’s amendments, the rejections under 35 USC 112(b) are withdrawn.
Rejection under 35 USC 103
Applicant argues that amended independent claim 1 is not disclosed or taught by the combination of Eiffert and Li.
Examiner respectfully disagrees and notes the rejections below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-6 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Probabilistic Crowd GAN: Multimodal Pedestrian Trajectory Prediction using a Graph Vehicle-Pedestrian Attention Network, by Eiffert et al. in view of Coordination and Trajectory Prediction for Vehicle Interactions via Bayesian Generative Modeling, by Li et al.
With respect to claim 1, Eiffert discloses A method for predicting a pedestrian trajectory, performed by a processor, the method comprising (see Abstract):
collecting, by the processor, pedestrian images from a Closed-Circuit Television (CCTV) or a predetermined database; identifying, by the processor, the pedestrian trajectory of a target pedestrian in the pedestrian images, (see figure 2, observed pedestrian trajectories, [use of CCTV cameras to observe a scene is obvious] page 3, left hand column first paragraph …observed position of each pedestrian…);
sampling, by the processor, based on the pedestrian trajectory of the target pedestrian, a predetermined number of latent vectors;
extracting, by the processor, a pedestrian feature vector from the pedestrian trajectory;
determining, by the processor, an expected trajectory of the target pedestrian by applying the pedestrian feature vector and the latent vectors to a neural network model, (see page 2, left hand column, first paragraph …Probabilistic Crowd GAN (PCGAN), allows for the direct prediction of probabilistic multimodal outputs during adversarial training. We make use of a Mixture Density Network (MDN) within the GAN’s generator to output a Gaussian mixture model (GMM) for each pedestrian, demonstrating how clustering of each component of the GMM allows the finding of likely modal-paths, that can then be compared to ground truth trajectories by the GAN’s discriminator); and
outputting, by the processor, the expected trajectory of the target pedestrian by applying the pedestrian feature vector and the latent vector to one of Gaussian distribution, Generative Adversarial Network (GAN), and Conditional Variational AutoEncoder (CVAE), and calculating and outputting, by the processor, an occurrence probability of the expected trajectory, (see page 4 for Gaussian distribution and generative adversarial network, and see Li for conditional variational autoencoder), as claimed.
However, Eiffert fails to disclose a predetermined number of latent vectors among a plurality of random vectors, which are determined according to a Monte Carlo or a Quasi-Monte Carlo method and are corresponding to an intention of the target pedestrian non-stochastically, (emphasis added) as claimed.
Li teaches a predetermined number of latent vectors among a plurality of random vectors, which are determined according to a Monte Carlo or a Quasi-Monte Carlo method and are corresponding to an intention of the target pedestrian non-stochastically, (emphasis added; see page 2499, right hand column Instead of making point mass estimation on the weight of neural networks in standard GAN methods, we introduce weight uncertainty by placing distributions over θg and θd to increase the diversity of generated samples as well as to alleviate the overfitting and mode collapse problems in the training process; also last paragraph wherein …To sample from the posterior distribution, we employ the Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) method [18]…) as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the two references, as they are analogous art solving the similar problem of pedestrian trajectory determination using image analysis. The teaching of Li to use multiple vectors corresponding to the intention of the pedestrian can be incorporated into the Eiffert system as suggested (see figure 2, Generator needing training), for suggestion, and modifying the system yields a better prediction of the pedestrian trajectory (see Abstract of Li), for motivation.
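For illustration only (not part of the record, and no function or variable name below appears in either reference), the claimed non-stochastic sampling of latent vectors by a Quasi-Monte Carlo method can be sketched as follows: instead of drawing i.i.d. Gaussian noise, one draws points from a deterministic low-discrepancy Halton sequence and maps them to standard-normal latent vectors through the inverse Gaussian CDF, so the same evenly spread set of latent vectors is produced on every run.

```python
from statistics import NormalDist

def radical_inverse(n, base):
    # Van der Corput radical inverse of integer n in the given base,
    # the building block of a Halton low-discrepancy sequence.
    inv, f = 0.0, 1.0 / base
    while n > 0:
        inv += (n % base) * f
        n //= base
        f /= base
    return inv

def halton_gaussian_latents(num_samples, dim):
    # Quasi-Monte Carlo (Halton) points in [0, 1)^dim, mapped to
    # standard-normal latent vectors via the inverse Gaussian CDF.
    # Deterministic: the same (num_samples, dim) always yields the
    # same latent vectors, i.e. non-stochastic sampling.
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dim]
    nd = NormalDist()
    latents = []
    for k in range(1, num_samples + 1):  # start at 1 so no coordinate is 0
        u = [radical_inverse(k, b) for b in primes]
        latents.append([nd.inv_cdf(x) for x in u])
    return latents
```

This is a generic sketch of Quasi-Monte Carlo latent sampling under the examiner's reading of the claim, not an implementation of either Eiffert's or Li's system.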
With respect to claim 3, the combination of Eiffert and Li further discloses wherein the identifying comprises: detecting a location of the target pedestrian for each frame, and identifying the pedestrian trajectory, (see Eiffert page 3, left hand column first paragraph …observed position of each pedestrian…), as claimed.
With respect to claim 4, the combination of Eiffert and Li further discloses wherein the sampling of the latent vectors non-stochastically includes sampling the predetermined number of latent vectors in an order in which trajectories predicted by the plurality of random vectors are most similar to an actual trajectory of the target pedestrian upon learning the neural network model, (see Eiffert page 3, right hand column second paragraph …from which we compute the set of likely modal paths…), as claimed.
With respect to claim 5, the combination of Eiffert and Li further discloses wherein the sampling of the latent vectors non-stochastically includes sampling the predetermined number of latent vectors by applying a loss function which decreases as the trajectories predicted by the plurality of random vectors are more similar to the actual trajectory of the target pedestrian to the neural network model, (see Eiffert page 3, last four lines of left hand column to first two lines of right hand column), as claimed.
With respect to claim 6, the combination of Eiffert and Li further discloses wherein the sampling of the latent vectors non-stochastically includes sampling the predetermined number of latent vectors in an order in which a distance between respective trajectories predicted by the plurality of random vectors are largest upon learning the neural network model, (see Eiffert page 5, right hand column last paragraph … This can result in a prediction with an incorrect speed profile but correct direction having a similar error as a prediction with the completely wrong direction, which is a significantly worse result. As such, Modified Hausdorff Distance (MHD) [31], which does not suffer this issue, is also included as an evaluation metric.
The metrics used are as follows:…MHD: A measure of similarity between trajectories, determining the largest distance from each predicted point to any point on the ground truth trajectory), as claimed.
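As an illustrative aside (not part of the record; the function name below is hypothetical), the standard Modified Hausdorff Distance referenced by Eiffert averages, in each direction, the distance from each point of one trajectory to its nearest point on the other, then takes the larger of the two averages:

```python
from math import dist

def modified_hausdorff(traj_a, traj_b):
    # Modified Hausdorff Distance between two trajectories given as
    # lists of (x, y) points: average nearest-point distance in each
    # direction, then the maximum of the two directed averages.
    def directed(src, dst):
        return sum(min(dist(p, q) for q in dst) for p in src) / len(src)
    return max(directed(traj_a, traj_b), directed(traj_b, traj_a))
```

Because every predicted point contributes, a trajectory with the right direction but wrong speed scores differently from one headed the wrong way, which is the property the quoted passage relies on.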
With respect to claim 10, the combination of Eiffert and Li further discloses wherein the sampling of the latent vectors non-stochastically includes extracting an interaction-aware feature between the target pedestrian and a surrounding pedestrian, and reflecting the interaction-aware feature to sample the latent vector, (see Eiffert page 4, left hand column first paragraph …included to allow the attention module to account for …each ped-ped relationship…), as claimed.
With respect to claim 11, the combination of Eiffert and Li further discloses wherein the extracting of the interaction-aware feature includes extracting the interaction-aware feature through a graph attention network (GAT), and inputting the interaction-aware feature into a multi-layer perceptron (MLP) to sample the latent vector, (see Eiffert page 4, section graph vehicle pedestrian attention network), as claimed.
With respect to claim 12, the combination of Eiffert and Li further discloses wherein the neural network model is learned by using a training dataset constituted by the pedestrian trajectory of the target pedestrian for a first time interval of the pedestrian image and the pedestrian trajectory of the target pedestrian for a second time interval continued to the first time interval, (see Eiffert page 4, right hand column … This requires extracting individual tracks from the GMM Yˆ, whilst preserving the multimodality of the distribution. We achieve this by adapting the multiple prediction adaptive clustering algorithm (MultiPAC) proposed by Zyner et al. [15] to allow backpropagation for use during training. MultiPAC finds the set of likely ‘modal paths’ Y¯, for each pedestrian from Yˆ. It achieves this by clustering the components of the GMM at each timestep using DBSCAN [25], determining each cluster’s centroid from the weighted average of all Gaussians in the mixture. Clusters in subsequent timesteps are assigned to parent clusters…), as claimed.
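As a purely illustrative sketch of the claim 12 limitation (not drawn from either reference; the function name is hypothetical), a training dataset of the claimed form can be built by sliding a window over an observed trajectory, pairing each first-interval segment with the second-interval segment that immediately follows it:

```python
def make_trajectory_pairs(trajectory, obs_len, pred_len):
    # Slide a window over a pedestrian trajectory (list of (x, y) points),
    # pairing each observed segment (first time interval) with the
    # contiguous segment that follows it (second time interval).
    pairs = []
    for start in range(len(trajectory) - obs_len - pred_len + 1):
        past = trajectory[start:start + obs_len]
        future = trajectory[start + obs_len:start + obs_len + pred_len]
        pairs.append((past, future))
    return pairs
```

Each (past, future) pair supplies one training example: the model conditions on the first interval and is supervised on the contiguous second interval.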
Allowable Subject Matter
Claims 7-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIKKRAM BALI whose telephone number is (571)272-7415. The examiner can normally be reached Monday-Friday 7:00AM-3:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIKKRAM BALI/Primary Examiner, Art Unit 2663