DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 06/15/2023, 08/09/2023, 01/03/2024, 03/11/2024, 08/16/2024, 10/04/2024, 12/09/2024 and 01/27/2025 were filed. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1,
Step 1: Is the claim to a process, machine, manufacture or composition of matter?
Claim 1 is directed to a machine.
Step 1: yes.
Step 2A, prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
training a SNN based on two or more of the sensor signals as input and a distance-based loss for the two or more sensor signals; and back-propagating the distance-based loss to further train the SNN. (limitation is directed to a mathematical concept, in view of applicant’s specification paragraphs 0041-0043.)
Step 2A, prong 1: If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Step 2A, prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
a set of two or more sensors receiving two or more sensor signals; a memory storing one or more instructions; a processor executing one or more of the instructions stored on the memory to perform: (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
a set of two or more sensors receiving two or more sensor signals; a memory storing one or more instructions; a processor executing one or more of the instructions stored on the memory to perform: (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 2 and analogous claims 12 and 18,
Claim 2 incorporates the analysis of the machine of claim 1.
Step 2A/Step 2B:
wherein a sensor signal of the two or more of the sensor signals include a heart rate sensor signal, a gaze sensor signal, a pupil size sensor signal, a grip force sensor signal, a controller area network (CAN) signal, or a foot position sensor signal. (e.g., field of use, see MPEP 2106.05(h)).
Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 3 and analogous claims 13 and 19,
Claim 3 incorporates the analysis of the machine of claim 1.
Step 2A/Step 2B:
wherein the SNN is a Siamese convolutional neural network (SCNN). (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 4 and analogous claims 14 and 20,
Claim 4 incorporates the analysis of the machine of claim 3.
Step 2A/Step 2B:
wherein the SCNN includes symmetrical convolutional neural networks (CNN). (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 5 and analogous claim 15,
Claim 5 incorporates the analysis of the machine of claim 1.
Step 2A, prong 1:
wherein the distance-based loss is calculated using Euclidean distance. (limitation is directed to a mathematical concept, in view of applicant’s specification paragraphs 0041-0043.)
Step 2A, prong 1: If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Regarding claim 6 and analogous claim 16,
Claim 6 incorporates the analysis of the machine of claim 1.
Step 2A, prong 1:
wherein the distance-based loss is calculated using a contrastive loss function. (limitation is directed to a mathematical concept, in view of applicant’s specification paragraphs 0041-0043.)
Step 2A, prong 1: If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
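For reference only, the Euclidean distance and contrastive loss recited in claims 5 and 6 take the standard form sketched below; the vectors and margin values are hypothetical illustrations, not taken from the record:

```python
import numpy as np

def euclidean_distance(e1, e2):
    # Euclidean distance between two embedding vectors
    return float(np.sqrt(np.sum((np.asarray(e1) - np.asarray(e2)) ** 2)))

def contrastive_loss(e1, e2, y, margin=1.0):
    # y = 0 for a similar pair, y = 1 for a dissimilar pair (Dang's convention);
    # dissimilar pairs farther apart than the margin contribute no loss.
    d = euclidean_distance(e1, e2)
    return (1 - y) * 0.5 * d ** 2 + y * 0.5 * max(0.0, margin - d) ** 2

e1, e2 = [0.0, 0.0], [0.6, 0.8]                   # distance is 1.0
print(contrastive_loss(e1, e2, y=0))              # similar pair: ~0.5 * 1.0**2
print(contrastive_loss(e1, e2, y=1, margin=1.5))  # dissimilar: ~0.5 * (1.5 - 1.0)**2
```

Under this form, a similar pair is penalized for any separation, while a dissimilar pair is penalized only inside the margin, which matches the clipping behavior Dang describes.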
Regarding claim 7,
Claim 7 incorporates the analysis of the machine of claim 1.
Step 2A, prong 1:
wherein the training the SNN includes learning a similarity function. (limitation is directed to a mathematical concept, in view of applicant’s specification paragraphs 0041-0043.)
Step 2A, prong 1: If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Regarding claim 8,
Claim 8 incorporates the analysis of the machine of claim 1.
Step 2A, prong 1:
wherein the SNN is trained using one-shot learning. (limitation is directed to a mathematical concept, in view of applicant’s specification paragraphs 0041-0043.)
Step 2A, prong 1: If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Regarding claim 9,
Claim 9 incorporates the analysis of the machine of claim 1.
Step 2A/Step 2B:
wherein the SNN is trained based on drive context information as input. (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 10,
Claim 10 incorporates the analysis of the machine of claim 1.
Step 2A/Step 2B:
wherein the trained SNN outputs an adaptive driving style prediction based on two or more sensor signals received during an execution phase. (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 11,
Step 1: Is the claim to a process, machine, manufacture or composition of matter?
Claim 11 is directed to a machine.
Step 1: yes.
Step 2A, prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
calculating a first distance between the input data and a first class of a set of anchor…; (limitation is directed to a mathematical concept in view of applicant’s specification, see paragraphs 0037 and 0041)
calculating a second distance between the input data and a second class of the set of anchor data…; (limitation is directed to a mathematical concept in view of applicant’s specification, see paragraphs 0037 and 0041)
generating an adaptive driving style prediction based on the first distance and the second distance, (limitation is directed to a mental process where one can mentally evaluate the first and second distances and generate driving style predictions which could be aggressive or defensive driving as per specification.)
wherein the trained SNN is trained based on two or more sensor signals received during a training phase, a distance-based loss for the two or more sensor signals from the training phase, and by back-propagating the distance-based loss. (limitation is directed to a mathematical concept in view of applicant’s specification, see paragraphs 0041-0043)
Step 2A, prong 1: If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Step 2A, prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
a set of two or more sensors receiving two or more sensor signals as input data; a memory storing one or more instructions; a processor executing one or more of the instructions stored on the memory to perform: (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
using the trained SNN; (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
Step 2A, prong 2: Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
a set of two or more sensors receiving two or more sensor signals as input data; a memory storing one or more instructions; a processor executing one or more of the instructions stored on the memory to perform: (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
using the trained SNN; (e.g., mere instruction to apply the judicial exception using generic computer components; see MPEP 2106.05(f).)
Step 2B: Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 17,
Step 1: Is the claim to a process, machine, manufacture or composition of matter?
Claim 17 is directed to a process.
Step 1: yes.
The rest of the analysis for claim 17 is analogous to claim 11.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3-8 are rejected under 35 U.S.C. 103 as being unpatentable over Tai et al. (US Published Patent Application No. 20190244017, "Tai"), in view of Gao et al. (Manifold Siamese Network: A Novel Visual Tracking ConvNet for Autonomous Vehicles, "Gao").
In regard to claim 1, Tai teaches A system for Siamese neural network (SNN) based adaptive driving style prediction, comprising: a set of two or more sensors receiving two or more sensor signals; (Tai, paragraph 0005, “An objective of the present invention is to provide a gesture recognition method and a gesture recognition system using siamese neural network.” and paragraph 0008, “receiving a first training signal from a sensor to calculate a first feature by the first neural network unit;” and paragraph 0009, “receiving a second training signal from the sensor to calculate a second feature by the second neural network unit;” Examiner would like to point out that the sensor being used is a Range Doppler sensor. This is being interpreted as two sensors due to it being able to sense two different signal types, distance and velocity.)
training a SNN based on two or more of the sensor signals as input and a distance-based loss for the two or more sensor signals; and (Tai, paragraph 0030, “With reference to FIG. 2, the gesture recognition system using siamese neural network includes a sensor 10, a first neural network unit 11, a second neural network unit 12, a weight sharing unit 13, a similarity analysis unit 14, and a weight controlling unit 15.” and paragraph 0036, “The present invention can use two neural networks to generate two features, and can determine a similarity between the two features for training the first neural network unit and the second neural network unit.” and paragraph 0035, “The similarity analysis unit 14 determines a distance between the first feature and the second feature in the feature space.” and paragraph 0058, “…the distance determined by the similarity analysis unit 14 is calculated by a contrastive loss function.”)
However, Tai does not explicitly teach a memory storing one or more instructions;
a processor executing one or more of the instructions stored on the memory to perform:
back-propagating the distance-based loss to further train the SNN.
Gao teaches a memory storing one or more instructions; (Gao, pg. 1618, Col. 1, paragraph 1, “The proposed tracker was implemented in Matlab2017a based on MatConvNet. All experiments were carried on a PC with 3.6 GHz Intel i7 CPU, 16 GB RAM, and an Nvidia GTX 1080Ti GPU.”)
a processor executing one or more of the instructions stored on the memory to perform: (Gao, pg. 1618, Col. 1, paragraph 1, “The proposed tracker was implemented in Matlab2017a based on MatConvNet. All experiments were carried on a PC with 3.6 GHz Intel i7 CPU, 16 GB RAM, and an Nvidia GTX 1080Ti GPU.”)
back-propagating the distance-based loss to further train the SNN. (Gao, pg. 1613, Col. 2, paragraph 2, “CFNet [19] interpreted the closed-form solution correlation filter learner as a differentiable layer of a deep neural network in which the gradient can back-propagate when online tracking.”)
Tai and Gao are related to the same field of endeavor (i.e., Siamese neural networks). In view of the teachings of Gao, it would have been obvious to a person of ordinary skill in the art to apply the teachings of Gao to Tai before the effective filing date of the claimed invention in order to allow for efficient similarity evaluation. (Gao, pg. 1614, Col. 2, paragraph 1, “Due to a fully-convolutional network can compute the similarity at all translated subwindows between template image and search image, such similarity evaluation performs much more efficient than exhaustive search.”)
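For illustration only, the training recited in claim 1 (a distance-based loss on pairs of signals, propagated back into shared Siamese weights) can be sketched as follows; the one-dimensional linear embedding, the sample pairs, and the numerical central-difference gradients are hypothetical simplifications standing in for a trained SNN and analytic back-propagation:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)                 # shared weights: both Siamese branches use w

def embed(x):
    return float(w @ x)                # 1-D linear embedding shared by both branches

def pair_loss(x1, x2, y, margin=1.0):
    # distance-based (contrastive) loss on a labeled pair of signal vectors
    d = abs(embed(x1) - embed(x2))
    return (1 - y) * 0.5 * d ** 2 + y * 0.5 * max(0.0, margin - d) ** 2

pairs = [((np.array([1.0, 0.0]), np.array([1.1, 0.1])), 0),   # similar pair
         ((np.array([1.0, 0.0]), np.array([-1.0, 0.5])), 1)]  # dissimilar pair

def total_loss():
    return sum(pair_loss(x1, x2, y) for (x1, x2), y in pairs)

loss_before = total_loss()
lr, eps = 0.1, 1e-6
for _ in range(200):                   # gradient descent on the shared weights
    grad = np.zeros_like(w)
    for i in range(len(w)):            # central-difference gradient of the loss
        w[i] += eps; hi = total_loss()
        w[i] -= 2 * eps; lo = total_loss()
        w[i] += eps
        grad[i] = (hi - lo) / (2 * eps)
    w -= lr * grad
loss_after = total_loss()
print(loss_before > loss_after)        # True: training reduced the distance-based loss
```

The shared weight vector is the essential Siamese property: one update moves both branches at once, so similar pairs are pulled together and dissimilar pairs are pushed past the margin.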
In regard to claim 3, Tai and Gao teach the system of claim 1.
Tai further teaches wherein the SNN is a Siamese convolutional neural network (SCNN). (Tai, paragraph 0058, “In the above embodiments, the first neural network unit 11 and the second neural network unit 12 execute convolutional neural networks (CNNs) or recurrent neural networks (RNNs),”)
Tai and Gao are combinable for the same rationale as set forth above with respect to claim 1.
In regard to claim 4, Tai and Gao teach the system of claim 3.
Tai further teaches wherein the SCNN includes symmetrical convolutional neural networks (CNN). (Tai, paragraph 0058, “In the above embodiments, the first neural network unit 11 and the second neural network unit 12 execute convolutional neural networks (CNNs) or recurrent neural networks (RNNs),”)
Tai and Gao are combinable for the same rationale as set forth above with respect to claim 1.
In regard to claim 5, Tai and Gao teach the system of claim 1.
Gao further teaches wherein the distance-based loss is calculated using Euclidean distance. (Gao, pg. 1618, Col. 1, paragraph 4, “The latter precision plot expresses the curve of center location error (CLE), which calculates the average Euclidean distance between the center locations of objects and the ground-truth positions of tracking sequence.” Examiner would like to point out that the Euclidean distance used for the center location error measures how far the predicted center location lands from the ground-truth position, which in turn is being interpreted as the loss.)
Tai and Gao are combinable for the same rationale as set forth above with respect to claim 1.
In regard to claim 6, Tai and Gao teach the system of claim 1.
Tai further teaches wherein the distance-based loss is calculated using a contrastive loss function. (Tai, paragraph 0058, “In the above embodiments, the first neural network unit 11 and the second neural network unit 12 execute convolutional neural networks (CNNs) or recurrent neural networks (RNNs), the first training signal and the second training signal are Range Doppler Image (RDI) signals, and the distance determined by the similarity analysis unit 14 is calculated by a contrastive loss function.”)
Tai and Gao are combinable for the same rationale as set forth above with respect to claim 1.
In regard to claim 7, Tai and Gao teach the system of claim 1.
Tai further teaches wherein the training the SNN includes learning a similarity function. (Tai, paragraph 0055, “The similarity analysis unit 14 may calculate the distance between the sensing feature from the sensor 10 and the reference feature from the database 16 in the feature space to determine the gesture event.”)
Tai and Gao are combinable for the same rationale as set forth above with respect to claim 1.
In regard to claim 8, Tai and Gao teach the system of claim 1.
Gao further teaches wherein the SNN is trained using one-shot learning. (Gao, pg. 1613, Col. 2, paragraph 2, “Deeply, a quadruplet network with one-shot learning [35] derive from Siamese network consist of four branches that receive multiple tuple instances as inputs.”)
Tai and Gao are combinable for the same rationale as set forth above with respect to claim 1.
Claims 2, 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tai, in view of Gao, and further in view of Dang et al. (Driver Information Embedding with Siamese LSTM networks, "Dang").
In regard to claim 2 and analogous claims 12 and 18, Tai and Gao teach the system of claim 1.
However, Tai and Gao do not explicitly teach wherein a sensor signal of the two or more of the sensor signals include a heart rate sensor signal, a gaze sensor signal, a pupil size sensor signal, a grip force sensor signal, a controller area network (CAN) signal, or a foot position sensor signal.
Dang teaches wherein a sensor signal of the two or more of the sensor signals include a heart rate sensor signal, a gaze sensor signal, a pupil size sensor signal, a grip force sensor signal, a controller area network (CAN) signal, or a foot position sensor signal. (Dang, pg. 935, Col. 1, Intro., paragraph 2, “Although there are different techniques that can be used to precisely identify a driver using a camera with face detection[a gaze sensor signal, a pupil size sensor signal], voice recognition or even with special sensors such as finger print identification, driver identification based on the vehicle dynamics information is still a major research topic, because vehicle dynamics information are widely available and can be extracted from the CAN-Bus.”)
Tai, Gao and Dang are related to the same field of endeavor (i.e., Siamese neural networks). In view of the teachings of Dang, it would have been obvious to a person of ordinary skill in the art to apply the teachings of Dang to Tai and Gao before the effective filing date of the claimed invention in order to improve driver identification performance. (Dang, pg. 935, Col. 1, I. Intro, paragraph 1, “Most approaches focus on discovering features that can be used to improve the identification performance, often using machine learning methods.”)
In regard to claim 9, Tai and Gao teach the system of claim 1.
However, Tai and Gao do not explicitly teach wherein the SNN is trained based on drive context information as input.
Dang teaches wherein the SNN is trained based on drive context information as input. (Dang, pg. 935, Col. 2, II. Problem Formulation, A. Driver Classification, paragraph 1, “In other words, given a sample x, a model has to predict to which driver Di does x belong to. The drivers are the classes and are fixed before the training and testing process. The output of such models are usually estimates P(Di | x) for the probability that the given input was generated by the given drivers.”)
Tai, Gao and Dang are combinable for the same rationale as set forth above with respect to claim 2.
In regard to claim 10, Tai and Gao teach the system of claim 1.
However, Tai and Gao do not explicitly teach wherein the trained SNN outputs an adaptive driving style prediction based on two or more sensor signals received during an execution phase.
Dang teaches wherein the trained SNN outputs an adaptive driving style prediction based on two or more sensor signals received during an execution phase. (Dang, pg. 939, Col. 1, C. paragraph 1, “However, the prediction performance can be further improved when comparing a set of maneuvers. With the assumption that the driver does not change during a driving session, we can easily collect a set of maneuvers [two or more sensor signals received] that belongs to the same driver. For example, we can define a driving session as the time the driver’s seat belt remains buckled up.”)
Tai, Gao and Dang are combinable for the same rationale as set forth above with respect to claim 2.
In regard to claim 11, Tai teaches A system for Siamese neural network (SNN) based adaptive driving style prediction, comprising: a set of two or more sensors receiving two or more sensor signals as input data; (Tai, paragraph 0005, “An objective of the present invention is to provide a gesture recognition method and a gesture recognition system using siamese neural network.” and paragraph 0008, “receiving a first training signal from a sensor to calculate a first feature by the first neural network unit;” and paragraph 0009, “receiving a second training signal from the sensor to calculate a second feature by the second neural network unit;” Examiner would like to point out that the sensor being used is a Range Doppler sensor. This is being interpreted as two sensors due to it being able to sense two different signal types, distance and velocity.)
However, Tai does not explicitly teach a memory storing one or more instructions;
a processor executing one or more of the instructions stored on the memory to perform:
calculating a first distance between the input data and a first class of a set of anchor data using a trained SNN;
calculating a second distance between the input data and a second class of the set of anchor data using the trained SNN; and
generating an adaptive driving style prediction based on the first distance and the second distance,
wherein the trained SNN is trained based on two or more sensor signals received during a training phase, a distance-based loss for the two or more sensor signals from the training phase, and by back-propagating the distance-based loss.
Gao teaches a memory storing one or more instructions; (Gao, pg. 1618, Col. 1, paragraph 1, “The proposed tracker was implemented in Matlab2017a based on MatConvNet. All experiments were carried on a PC with 3.6 GHz Intel i7 CPU, 16 GB RAM, and an Nvidia GTX 1080Ti GPU.”)
a processor executing one or more of the instructions stored on the memory to perform: (Gao, pg. 1618, Col. 1, paragraph 1, “The proposed tracker was implemented in Matlab2017a based on MatConvNet. All experiments were carried on a PC with 3.6 GHz Intel i7 CPU, 16 GB RAM, and an Nvidia GTX 1080Ti GPU.”)
wherein the trained SNN is trained based on two or more sensor signals received during a training phase, a distance-based loss for the two or more sensor signals from the training phase, and by back-propagating the distance-based loss. (Gao, pg. 1613, Col. 2, paragraph 2, “CFNet [19] interpreted the closed-form solution correlation filter learner as a differentiable layer of a deep neural network in which the gradient can back-propagate when online tracking.” And pg. 1617, Col. 2, 4), paragraph 1, “During training process, we use a pretrained network from VGGNet [12] with the parameter, and fine-tune the Siamese network with our training sequences. We fixed the first three convolution layers and fine-tuned only the last two layers. The training dataset image pairs come from the ImageNet VID dataset [54]. The size of the exemplar and search image is same with [19]. We use stochastic gradient descent (SGD) to minimize the loss, and the mini-batch size is 16 images per iteration.”)
Tai and Gao are combinable for the same rationale as set forth above with respect to claim 1.
However, Tai and Gao do not explicitly teach calculating a first distance between the input data and a first class of a set of anchor data using a trained SNN;
calculating a second distance between the input data and a second class of the set of anchor data using the trained SNN; and
generating an adaptive driving style prediction based on the first distance and the second distance,
Dang teaches calculating a first distance between the input data and a first class of a set of anchor data using a trained SNN; (Dang, pg. 937, Col. 1, paragraph 2, “This loss function is similar to the cross entropy loss, except that it optimizes the distance between g(x1) and g(x2) instead of the prediction probability. In addition, the margin m is introduced to the loss. For computing the loss for a dissimilar pair, its distance will be clipped at m. Dissimilar pairs, whose distances are larger than or equal to m will not contribute to the loss. This loss function allows the network to learn to map the dissimilar pair to be larger than or equal to m, not forcing it to be exactly m. The distance between maneuver executions from a same person will be optimized to be zero.”)
calculating a second distance between the input data and a second class of the set of anchor data using the trained SNN; and (Dang, pg. 937, Col. 1, paragraph 2, “Here, Y is the label for the input pair (E1;E2): Y = 0 if E1 and E2 are generated by the same driver and Y = 1 otherwise. d is the distance function between two embedding vectors E1 and E2.”)
generating an adaptive driving style prediction based on the first distance and the second distance, (Dang, pg. 935, Col. 2, II. Problem Formulation, A. Driver Classification, paragraph 1, “In other words, given a sample x, a model has to predict to which driver Di does x belong to. The drivers are the classes and are fixed before the training and testing process. The output of such models are usually estimates P(Di | x) for the probability that the given input was generated by the given drivers.” and pg. 937, Col. 1, paragraph 2, “This loss function is similar to the cross entropy loss, except that it optimizes the distance between g(x1) and g(x2) instead of the prediction probability. In addition, the margin m is introduced to the loss. For computing the loss for a dissimilar pair, its distance will be clipped at m. Dissimilar pairs, whose distances are larger than or equal to m will not contribute to the loss. This loss function allows the network to learn to map the dissimilar pair to be larger than or equal to m, not forcing it to be exactly m. The distance between maneuver executions from a same person will be optimized to be zero. Formally, the contrastive loss function is defined as:…Here, Y is the label for the input pair (E1;E2): Y = 0 if E1 and E2 are generated by the same driver and Y = 1 otherwise. d is the distance function between two embedding vectors E1 and E2.” Examiner would like to point out that two different distances are being measured, one involving E1 and one involving E2.)
Tai, Gao and Dang are combinable for the same rationale as set forth above with respect to claim 2.
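For illustration only, the two-distance comparison recited in claim 11 (distances from the input to two classes of anchor data, compared to yield a driving style prediction such as aggressive or defensive per the specification) can be sketched as follows; the identity `embed` stands in for a trained SNN branch, and the anchor values are hypothetical:

```python
import numpy as np

def predict_style(embed, x, anchors_a, anchors_b):
    # Embed the input, compute its mean Euclidean distance to each class of
    # anchor embeddings, and predict the class whose anchors lie closer.
    e = embed(x)
    d_a = float(np.mean([np.linalg.norm(e - embed(a)) for a in anchors_a]))
    d_b = float(np.mean([np.linalg.norm(e - embed(b)) for b in anchors_b]))
    return ("aggressive" if d_a < d_b else "defensive"), d_a, d_b

# The identity map stands in for a trained SNN branch; signal values are hypothetical.
embed = lambda v: np.asarray(v, dtype=float)
label, d_a, d_b = predict_style(embed, [0.1, 0.1],
                                anchors_a=[[0.0, 0.0], [0.2, 0.1]],
                                anchors_b=[[5.0, 5.0], [4.5, 5.5]])
print(label)   # "aggressive" — the input lies near the first anchor class
```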
In regard to claim 17, the claim recites similar limitations as corresponding claim 11, and is rejected for similar reasons as claim 11 using similar teachings and rationale.
In regard to claim 13 and analogous claim 19, Tai, Gao and Dang teach the system of claim 11.
Tai further teaches wherein the SNN is a Siamese convolutional neural network (SCNN). (Tai, paragraph 0058, “In the above embodiments, the first neural network unit 11 and the second neural network unit 12 execute convolutional neural networks (CNNs) or recurrent neural networks (RNNs),”)
Tai, Gao and Dang are combinable for the same rationale as set forth above with respect to claim 11.
In regard to claim 14 and analogous claim 20, Tai, Gao and Dang teach the system of claim 13.
Tai further teaches wherein the SCNN includes symmetrical convolutional neural networks (CNN). (Tai, paragraph 0058, “In the above embodiments, the first neural network unit 11 and the second neural network unit 12 execute convolutional neural networks (CNNs) or recurrent neural networks (RNNs),”)
Tai, Gao and Dang are combinable for the same rationale as set forth above with respect to claim 11.
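The symmetry addressed in claims 14 and 20 can be illustrated with a minimal sketch in which both branches of the Siamese network share a single set of weights. A linear layer stands in for the convolutional branches described in Tai; all names and values below are illustrative and are not drawn from any cited reference:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # one weight matrix shared by both branches

def embed(x, W):
    """One branch of the Siamese network: an embedding under shared weights W.
    (An illustrative linear layer standing in for a full CNN branch.)"""
    return np.tanh(W @ x)

x1 = rng.standard_normal(8)
x2 = rng.standard_normal(8)

# Symmetry: both inputs pass through identical weights, so swapping the
# inputs swaps the embeddings but leaves the distance between them unchanged.
d12 = np.linalg.norm(embed(x1, W) - embed(x2, W))
d21 = np.linalg.norm(embed(x2, W) - embed(x1, W))
```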
In regard to claim 15, Tai, Gao and Dang teach the system of claim 11.
Gao further teaches wherein the distance-based loss is calculated using Euclidean distance. (Gao, pg. 1618, Col. 1, paragraph 4, “The latter precision plot expresses the curve of center location error (CLE), which calculates the average Euclidean distance between the center locations of objects and the ground-truth positions of tracking sequence.”, Examiner would like to point out that the Euclidean distance used for the center location error measures the distance between the ground-truth position and the tracked location, which is in turn being interpreted as the loss.)
Tai, Gao and Dang are combinable for the same rationale as set forth above with respect to claim 11.
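For clarity, the center location error quoted above from Gao, i.e., an average Euclidean distance interpreted as the loss, may be sketched as follows (illustrative only; the function name and sample coordinates are the examiner's and are not drawn from Gao):

```python
import numpy as np

def center_location_error(predicted_centers, ground_truth_centers):
    """Average Euclidean distance between predicted object centers and
    ground-truth centers, in the manner of Gao's center location error (CLE).
    Illustrative sketch; array names and sample values are not from Gao."""
    diffs = np.asarray(predicted_centers) - np.asarray(ground_truth_centers)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

# Two frames: one perfect prediction and one off by a (3, 4) offset.
cle = center_location_error([[0.0, 0.0], [3.0, 4.0]], [[0.0, 0.0], [0.0, 0.0]])
```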
In regard to claim 16, Tai, Gao and Dang teach the system of claim 11.
Tai further teaches wherein the distance-based loss is calculated using a contrastive loss function. (Tai, paragraph 0058, “In the above embodiments, the first neural network unit 11 and the second neural network unit 12 execute convolutional neural networks (CNNs) or recurrent neural networks (RNNs), the first training signal and the second training signal are Range Doppler Image (RDI) signals, and the distance determined by the similarity analysis unit 14 is calculated by a contrastive loss function.”)
Tai, Gao and Dang are combinable for the same rationale as set forth above with respect to claim 11.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SKYLAR K VANWORMER whose telephone number is (703)756-1571. The examiner can normally be reached M-F, 6:00 am to 3:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.K.V./Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146