Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-32 are pending.
Claim Objections
Claim 26 is objected to because of the following informalities: claim 26 appears to contain a typographical error. The Examiner will interpret this claim as “The NS receiver of claim 25, wherein the set of non-NS signal metrics comprises metrics from an inertial navigation system.” Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-32 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more, as set forth in the following analysis.
Analysis
Step 2A, Prong 1:
Step 2A, prong 1, of the 2019 Guidance, first looks to whether the claim recites any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activities such as a fundamental economic practice, or mental processes). 84 Fed. Reg. at 52–54.
Claim(s) 1, 17 recite(s):
“obtaining a set of metrics associated with the received NS signal”
“using the set of metrics as input to a neural network to classify whether the received NS signal is affected by spoofing.”
Claim(s) 12, 28 recite(s):
“obtaining a set of metrics associated with the received NS signal”
“dividing the set of metrics into one or more subsets”
“using each of the one or more subsets of metrics as inputs to one or more neural networks to generate one or more intermediate outputs;”
“providing the one or more intermediate outputs to a final decision module to classify whether the received NS signal is affected by spoofing.”
The obtaining, dividing, using, and providing steps constitute mental processes that can be performed in the human mind. More specifically, these steps entail an observation, evaluation, and/or judgment that can be performed exclusively in the human mind or with the aid of pencil and paper. The neural networks and final decision modules are not described in applicant’s specification with any level of detail that would indicate they require steps that could not be performed in the mind. The 2019 Guidance expressly recognizes mental evaluations and judgments as constituting patent-ineligible abstract ideas. 2019 Guidance, 84 Fed. Reg. at 52. Accordingly, these limitations recite a judicial exception to patent-eligible subject matter.
Claims 2-7, 18-23 further describe the mental processes of claim 1/17. More specifically, they further describe the metrics observed/evaluated, and can be performed exclusively in the human mind or with the aid of pencil and paper. The 2019 Guidance expressly recognizes mental evaluations and judgments as constituting patent-ineligible abstract ideas. 2019 Guidance, 84 Fed. Reg. at 52. Accordingly, these limitations recite a judicial exception to patent-eligible subject matter.
Claim(s) 8, 24 further describe(s) the mental processes of claim 1/17. More specifically, they further describe the potential evaluations and determinations made, and can be performed exclusively in the human mind or with the aid of pencil and paper. The 2019 Guidance expressly recognizes mental evaluations and judgments as constituting patent-ineligible abstract ideas. 2019 Guidance, 84 Fed. Reg. at 52. Accordingly, these limitations recite a judicial exception to patent-eligible subject matter.
Claim(s) 9, 25 recite(s):
“obtaining a set of non-NS signal metrics”
“using the set of non-NS signal metrics as additional inputs to the neural network”
The obtaining and using steps constitute a mental process that can be performed in the human mind. More specifically, these steps entail an observation, evaluation, and/or judgment that can be performed exclusively in the human mind or with the aid of pencil and paper. The 2019 Guidance expressly recognizes mental evaluations and judgments as constituting patent-ineligible abstract ideas. 2019 Guidance, 84 Fed. Reg. at 52. The remainder of this claim element merely describes the obtained metrics. Accordingly, these limitations recite a judicial exception to patent-eligible subject matter.
Claims 10, 26 further describe the mental process of claim 9/25. More specifically, they further describe the metrics observed/evaluated, and can be performed exclusively in the human mind or with the aid of pencil and paper. The 2019 Guidance expressly recognizes mental evaluations and judgments as constituting patent-ineligible abstract ideas. 2019 Guidance, 84 Fed. Reg. at 52. Accordingly, these limitations recite a judicial exception to patent-eligible subject matter.
Claims 11, 27 do not recite any additional judicial exceptions.
Claims 13-15, 29-31 further describe the mental processes of claim 12/28. More specifically, they further describe the evaluations/judgments of the final decision module and can be performed exclusively in the human mind or with the aid of pencil and paper. The final decision module, final decision neural network, and rules-based system are not described in applicant’s specification with any level of detail that would indicate they require steps that could not be performed in the mind. The 2019 Guidance expressly recognizes mental evaluations and judgments as constituting patent-ineligible abstract ideas. 2019 Guidance, 84 Fed. Reg. at 52. Accordingly, these limitations recite a judicial exception to patent-eligible subject matter.
Claim(s) 16, 32 further describe(s) the mental processes of claim 12/28. More specifically, they further describe the potential evaluations and determinations made, and can be performed exclusively in the human mind or with the aid of pencil and paper. The 2019 Guidance expressly recognizes mental evaluations and judgments as constituting patent-ineligible abstract ideas. 2019 Guidance, 84 Fed. Reg. at 52. Accordingly, these limitations recite a judicial exception to patent-eligible subject matter.
Step 2A, Prong 2:
Step 2A, prong 2, of the 2019 Guidance, next analyzes whether the claim recites additional elements that individually or in combination integrate the judicial exception into a practical application. 2019 Guidance, 84 Fed. Reg. at 53–55. The 2019 Guidance identifies considerations indicative of whether an additional element or combination of elements integrate the judicial exception into a practical application, such as an additional element reflecting an improvement in the functioning of a computer or an improvement to other technology or technical field. Id. at 55; MPEP § 2106.05(a).
Claim(s) 1, 12 recite(s):
“receiving the NS signal”
The receiving step corresponds to mere data gathering, insignificant pre-solution activity. Extra-solution activity does not integrate a judicial exception into a practical application. MPEP 2106.05(g).
Claim(s) 17 recite(s):
A navigation system (NS) receiver,
an antenna configured to receive a NS signal;
a module
a neural network
The receiving configuration corresponds to mere data gathering, insignificant pre-solution activity. Extra-solution activity does not integrate a judicial exception into a practical application. MPEP 2106.05(g). The receiver, antenna, and module correspond to generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h). The neural network corresponds to a computer used as a tool to implement an abstract idea. MPEP 2106.05(f). Therefore, the judicial exception is not integrated into a practical application.
Claim(s) 28 recite(s):
A navigation system (NS) receiver,
an antenna configured to receive a NS signal;
a module
a neural network
a final decision module
The receiving configuration corresponds to mere data gathering, insignificant pre-solution activity. Extra-solution activity does not integrate a judicial exception into a practical application. MPEP 2106.05(g). The receiver, antenna, and module correspond to generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h). The neural network and final decision module correspond to computers used as a tool to implement an abstract idea. MPEP 2106.05(f). Therefore, the judicial exception is not integrated into a practical application.
Claims 2-10, 18-26 do not recite any additional elements.
Claims 11, 27 further describe the received signal of claim 1/17, and therefore further describe the mere data gathering, insignificant pre-solution activity discussed above with respect to claim 1/17. Extra-solution activity does not integrate a judicial exception into a practical application. MPEP 2106.05(g).
Claims 13-16, 29-32 do not recite any additional limitations.
Step 2B:
Under step 2B of the 2019 Guidance, we next analyze whether the claim adds any specific limitations beyond the judicial exception that, either alone or as an ordered combination, amount to more than “well-understood, routine, conventional” activity in the field. 84 Fed. Reg. at 56; MPEP § 2106.05(d).
As discussed above, the additional limitations of claim(s) 1 and 12 correspond to extra-solution activity. Therefore, the additional elements recited do not, as an ordered combination, amount to more than “well-understood, routine, conventional” activity in the field.
As discussed above, the additional limitations of claim(s) 17, 28 correspond to extra-solution activity, general linking to a particular field of use, or computers used as tools to implement an abstract idea. Therefore, the additional elements recited do not, as an ordered combination, amount to more than “well-understood, routine, conventional” activity in the field.
Claims 2-10, 18-26 do not recite any additional elements.
As discussed above, the additional limitations of claim(s) 11, 27 correspond to extra-solution activity. Therefore, the additional elements recited do not, as an ordered combination, amount to more than “well-understood, routine, conventional” activity in the field.
Claims 13-16, 29-32 do not recite any additional limitations.
Therefore, claims 1-32 are rejected under 35 U.S.C. § 101 as being directed to patent-ineligible subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3, 11, 17, 19, 27 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 20250012928 A1 to Lall.
Regarding claim 1,
US 20250012928 A1 to Lall teaches:
A computer implemented method for detecting spoofing of a navigation system (NS) signal, the method comprising the steps of:
receiving the NS signal; (Fig. 3; [0023] – “In step 301, the receiver 201 receives a navigation signal.”)
obtaining a set of metrics associated with the received NS signal; ([0023, 29] – “In step 302, the receiver 201 pre-processes the received navigation signal, which can comprise of normalizing and standardizing data values of the received navigation signal (i.e., the plurality of features of the received navigation signal), such that the data values are in a pre-defined range with minimal deviation, and configuring the normalized and standardized navigation signal into a plurality of channels, wherein each feature can correspond to a channel.”) and
using the set of metrics as input to a neural network to classify whether the received NS signal is affected by spoofing. ([0023, 30] – “In step 303, the receiver 201 checks if the pre-processed navigation signal is a genuine navigation signal or a spoofed navigation signal using the time series based neural network (such as, but not limited to, LSTM, RNN)… The time series based neural network can convert the normalized features (from the pre-processed navigation signal) to a meaningful vector”)
Regarding claim 3,
Lall teaches the invention as claimed and discussed above.
Lall further teaches:
The computer implemented method of claim 1 wherein the set of metrics comprises total input power. ([0022] – “examples of the features can be… abs_P”)
Regarding claim 11,
Lall teaches the invention as claimed and discussed above.
Lall further teaches:
The computer implemented method of claim 1 wherein the NS signal comprises a global navigation satellite system (GNSS) signal. ([0020] – “Navigation systems, as disclosed herein, can be any signal that can be used for navigation. Examples of the navigation systems can be, but not limited to, Global Positioning Systems (GPS), Global Navigation Satellite System (GLONASS)…”)
Regarding claim(s) 17, 19, 27,
Claim(s) 17, 19, 27, is/are claims corresponding to claim(s) 1, 3, and 11 respectively. Accordingly, the Examiner’s remarks and application of the prior art with respect to claim(s) 17, 19, 27 are substantially the same as those made above with respect to claim(s) 1, 3, and 11.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 2, 4, 18, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20250012928 A1 to Lall in view of US 20230007488 A1 to Mallya.
Regarding claim 2,
Lall teaches the invention as claimed and discussed above.
Lall does not explicitly teach the additional elements of the claim.
Mallya teaches:
The computer implemented method of claim 1 wherein the set of metrics comprises power spectral density. ([0036] – “That is, the one or more models 518 may be trained to evaluate received signals on the temporal axis and/or the spatial axis in order to compare power spectral density of the received signals to power spectral density data of a plurality of prior observations. This is done in order to determine whether the base station 502 is experiencing a spoofing attack.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Mallya’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for determining whether the signal is affected by spoofing; (2) Mallya teaches specific features of received signals that can be used by a neural network to determine spoofing; (3) at [0022], Lall teaches example features, e.g., features relating to power, to be input to the model, and one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (see MPEP 2143).
Regarding claim 4,
Lall teaches the invention as claimed and discussed above.
Lall does not explicitly teach the additional elements of the claim.
Mallya teaches:
The computer implemented method of claim 1 wherein the set of metrics comprises carrier to noise ratio of the received NS signal. (Figs. 6, 7; [0039-40] – “a carrier power to noise power ratio depends on a carrier/global navigation satellite system (GNSS) signal power that the receiver sees at an antenna of the base station 702, for example. This information may be used to determine whether received signals are spoofed signals or not.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Mallya’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for determining whether the signal is affected by spoofing; (2) Mallya teaches specific features of received signals that can be used by a neural network to determine spoofing; (3) at [0022], Lall teaches example features, e.g., features relating to the carrier signal, to be input to the model, and one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (see MPEP 2143).
Regarding claim(s) 18, 20,
Claim(s) 18, 20 is/are claims corresponding to claim(s) 2, 4 respectively. Accordingly, the Examiner’s remarks and application of the prior art with respect to claim(s) 18, 20 are substantially the same as those made above with respect to claim(s) 2, 4.
Claim(s) 5, 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20250012928 A1 to Lall in view of US 20110068973 A1 to Humphreys.
Regarding claim 5,
Lall teaches the invention as claimed and discussed above.
Lall does not explicitly teach the additional elements of the claim.
Humphreys teaches:
The computer implemented method of claim 1 wherein the set of metrics comprises signal quality monitoring of the received NS signal. ([claim 21] – “The GNSS signal assimilator of claim 19, further comprising an anti-spoofing module operable to continuously analyze data received at the GNSS signal receiver to detect potential spoofing via… signal quality deterioration.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Humphreys’ known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for determining whether the signal is affected by spoofing; (2) Humphreys teaches specific features of received signals that can be used to determine spoofing; (3) at [0022], Lall teaches example features, e.g., features relating to signal quality, to be input to the model, and one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (see MPEP 2143).
Regarding claim(s) 21,
Claim(s) 21 is/are claims corresponding to claim(s) 5 respectively. Accordingly, the Examiner’s remarks and application of the prior art with respect to claim(s) 21 are substantially the same as those made above with respect to claim(s) 5.
Claim(s) 6, 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20250012928 A1 to Lall in view of US 20100127923 A1 to Harper.
Regarding claim 6,
Lall teaches the invention as claimed and discussed above.
Lall does not explicitly teach the additional elements of the claim.
Harper teaches:
The computer implemented method of claim 1 wherein the set of metrics comprises a clock bias of the received NS signal. ([0037] – “Several attributes in satellite measurements may be utilized as a source for detecting location spoofing, such as, but not limited to… clock bias,”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Harper’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for determining whether the signal is affected by spoofing; (2) Harper teaches specific features of received signals that can be used to determine spoofing; (3) at [0022], Lall teaches example features, e.g., features relating to the carrier signal, to be input to the model, and one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (see MPEP 2143).
Regarding claim(s) 22,
Claim(s) 22 is/are claims corresponding to claim(s) 6 respectively. Accordingly, the Examiner’s remarks and application of the prior art with respect to claim(s) 22 are substantially the same as those made above with respect to claim(s) 6.
Claim(s) 7, 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20250012928 A1 to Lall in view of US 20210255331 A1 to Broumandan.
Regarding claim 7,
Lall teaches the invention as claimed and discussed above.
Lall does not explicitly teach the additional elements of the claim.
Broumandan teaches:
The computer implemented method of claim 1 wherein the set of metrics comprises a cross-ambiguity function of the received NS signal. ([0007] – “a receiver may search over the cross-ambiguity function range to identify the number of correlation peaks above the detection threshold… A determination is made that a spoofing attack is occurring if PPPM monitoring and/or the cross-ambiguity function monitoring technique detects a spoofing attack.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Broumandan’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for determining whether the signal is affected by spoofing; (2) Broumandan teaches specific features of received signals that can be used to determine spoofing; (3) at [0022], Lall teaches example features, e.g., features relating to the received signal, to be input to the model, and one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (see MPEP 2143).
Regarding claim(s) 23,
Claim(s) 23 is/are claims corresponding to claim(s) 7 respectively. Accordingly, the Examiner’s remarks and application of the prior art with respect to claim(s) 23 are substantially the same as those made above with respect to claim(s) 7.
Claim(s) 8, 12-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20250012928 A1 to Lall in view of US 20210271259 A1 to Karpathy.
Regarding claim 8,
Lall teaches the invention as claimed and discussed above.
Lall further teaches:
The computer implemented method of claim 1 wherein classifying whether the received NS signal is affected by spoofing ([0023, 30] – “In step 303, the receiver 201 checks if the pre-processed navigation signal is a genuine navigation signal or a spoofed navigation signal using the time series based neural network (such as, but not limited to, LSTM, RNN)… The time series based neural network can convert the normalized features (from the pre-processed navigation signal) to a meaningful vector”)
Karpathy teaches:
The computer implemented method of claim 1 wherein classifying whether the received signal is affected by a particular use case (Fig. 4; [0080] – “At 415, the identified sensor data is transmitted” [0071] – “Examples of the use cases include identifying an on ramp, a tunnel exit, an obstacle in the road, a fork in the road, specific types of vehicles, etc… In some embodiments, one or more trigger classifiers and parameters are used to identify one or more different use cases”) further comprises determining a type of the particular use case occurring. (Fig. 4; [0078] – “At 413, a determination is made whether the classifier score exceeds a threshold and whether required trigger conditions are met… In some embodiments, additional trigger required conditions may be applied after the classifier score is determined. For example, the determined classifier score may be compared to previously determined classifier scores within a certain time window. As another example, the determined classifier score may be compared to previously determined scores from the same location. As another example, sensor data may be required to meet both a time condition and a location condition. For example, only sensor data with the highest score from the same location within the last 10 minutes may be retained as potential data.” Examiner notes that a use case of a determined “type” may correspond to, e.g., use case with high classification score, use case from a certain time/location, very likely positive/negative example of use case, very unlikely positive/negative example of use case, etc.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Karpathy’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for classifying whether the signal is affected by spoofing; (2) Karpathy teaches a neural network with intermediate outputs and a trigger classifier for determining a classifier score, wherein the intermediate output is the input to the trigger classifier, and Karpathy further teaches that these systems are for analyzing pre-processed sensor data; (3) one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (see MPEP 2143).
Regarding claim 12,
US 20250012928 A1 to Lall teaches:
A computer implemented method for detecting spoofing of a navigation system (NS) signal, the method comprising the steps:
receiving the NS signal; (Fig. 3; [0023] – “In step 301, the receiver 201 receives a navigation signal.”)
obtaining a set of metrics associated with the received NS signal; ([0023, 29] – “In step 302, the receiver 201 pre-processes the received navigation signal, which can comprise of normalizing and standardizing data values of the received navigation signal (i.e., the plurality of features of the received navigation signal), such that the data values are in a pre-defined range with minimal deviation, and configuring the normalized and standardized navigation signal into a plurality of channels, wherein each feature can correspond to a channel.”) and
dividing the set of metrics into one or more subsets; ([0023, 29] – “In step 302, the receiver 201 pre-processes the received navigation signal, which can comprise of normalizing and standardizing data values of the received navigation signal (i.e., the plurality of features of the received navigation signal), such that the data values are in a pre-defined range with minimal deviation, and configuring the normalized and standardized navigation signal into a plurality of channels, wherein each feature can correspond to a channel.”)
using each of the one or more subsets of metrics as inputs to one or more neural networks to generate one or more [intermediate] outputs; (limitations in brackets were lined through, corresponding to limitations not taught by the reference) ([0023, 30] – “In step 303, the receiver 201 checks if the pre-processed navigation signal is a genuine navigation signal or a spoofed navigation signal using the time series based neural network (such as, but not limited to, LSTM, RNN)… The time series based neural network can convert the normalized features (from the pre-processed navigation signal) to a meaningful vector”) and
[providing the one or more intermediate outputs to a final decision module] to classify whether the received NS signal is affected by spoofing. ([0023, 30] – “In step 303, the receiver 201 checks if the pre-processed navigation signal is a genuine navigation signal or a spoofed navigation signal using the time series based neural network (such as, but not limited to, LSTM, RNN)… The time series based neural network can convert the normalized features (from the pre-processed navigation signal) to a meaningful vector”)
US 20210271259 A1 to Karpathy teaches:
generating one or more intermediate outputs by a neural network ([0072] – “a deep learning analysis of an autonomous driving system is initiated with sensor data captured by sensors attached to a vehicle. In some embodiments, the initiated deep learning analysis includes pre-processing the sensor data… In some embodiments, the output of the first layer and any intermediate layer is considered an intermediate output. In various embodiments, an intermediate output is the output of a layer of a machine learning model other than the final output (e.g., the output of the final layer of the model).”)
providing the one or more intermediate outputs to a final decision module ([0027] – “The trigger classifier is applied to the intermediate output of the same layer of the deployed deep learning system to determine a classifier score. In some embodiments, the input to the trigger classifier is the intermediate output of a layer of a convolution neural network (CNN) applied to sensor data”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Karpathy’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for classifying whether the signal is affected by spoofing; (2) Karpathy teaches a neural network with intermediate outputs and a trigger classifier for determining a classifier score, wherein the intermediate output is the input to the trigger classifier. Karpathy further teaches that these systems are for analyzing pre-processed sensor data; (3) One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (See MPEP 2143).
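To make the claimed architecture concrete, the following minimal sketch (random placeholder weights; not an implementation from either reference) divides a metric vector into subsets, runs each subset through a small network to produce an intermediate output, and feeds the concatenated intermediate outputs to a final decision module:

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_network(x, w, b):
    """One hidden-layer network; its activation is the 'intermediate output'."""
    return np.tanh(w @ x + b)

def final_decision(intermediate_outputs, v):
    """Final decision module: a linear layer plus sigmoid over the
    concatenated intermediate outputs; > 0.5 is treated as 'spoofed'."""
    z = np.concatenate(intermediate_outputs)
    score = 1.0 / (1.0 + np.exp(-(v @ z)))
    return score, score > 0.5

metrics = rng.normal(size=8)           # hypothetical NS-signal metrics
subsets = [metrics[:4], metrics[4:]]   # "dividing the set of metrics"
params = [(rng.normal(size=(3, 4)), rng.normal(size=3)) for _ in subsets]
inters = [subset_network(x, w, b) for x, (w, b) in zip(subsets, params)]
v = rng.normal(size=6)
score, spoofed = final_decision(inters, v)
print(round(float(score), 3), bool(spoofed))
```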
Regarding claim 13,
Lall in view of Karpathy teaches the invention as claimed and discussed above.
Karpathy further teaches:
The computer implemented method of claim 12 wherein the final decision module comprises a final decision neural network. ([0064] – “At 305, a trigger classifier is trained. In some embodiments, a trigger classifier is a support vector machine or small neural network.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Karpathy’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for classifying whether the signal is affected by spoofing; (2) Karpathy teaches a neural network with intermediate outputs and a trigger classifier for determining a classifier score, wherein the intermediate output is the input to the trigger classifier. Karpathy further teaches that these systems are for analyzing pre-processed sensor data; (3) One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (See MPEP 2143).
Regarding claim 14,
Lall in view of Karpathy teaches the invention as claimed and discussed above.
Karpathy further teaches:
The computer implemented method of claim 12 wherein the final decision module comprises a rules-based system. ([0028] – “In some embodiments, trigger properties such as filters are applied to the trigger classifier to determine the conditions that must be met to proceed with determining a classifier score, the circumstances under which the classifier score exceeds a threshold, and/or the conditions necessary to retain the sensor data.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Karpathy’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for classifying whether the signal is affected by spoofing; (2) Karpathy teaches a neural network with intermediate outputs and a trigger classifier for determining a classifier score, wherein the intermediate output is the input to the trigger classifier. Karpathy further teaches that these systems are for analyzing pre-processed sensor data; (3) One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (See MPEP 2143).
Regarding claim 15,
Lall in view of Karpathy teaches the invention as claimed and discussed above.
Karpathy further teaches:
The computer implemented method of claim 12 wherein the final decision module comprises a final decision neural network ([0064] – “At 305, a trigger classifier is trained. In some embodiments, a trigger classifier is a support vector machine or small neural network.”) and a rules-based system. ([0028] – “In some embodiments, trigger properties such as filters are applied to the trigger classifier to determine the conditions that must be met to proceed with determining a classifier score, the circumstances under which the classifier score exceeds a threshold, and/or the conditions necessary to retain the sensor data.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Karpathy’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for classifying whether the signal is affected by spoofing; (2) Karpathy teaches a neural network with intermediate outputs and a trigger classifier for determining a classifier score, wherein the intermediate output is the input to the trigger classifier. Karpathy further teaches that these systems are for analyzing pre-processed sensor data; (3) One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (See MPEP 2143).
Regarding claim 16,
Lall in view of Karpathy teaches the invention as claimed and discussed above.
Lall further teaches:
The computer implemented method of claim 15 further comprising classifying the received NS signal as ([0023, 30] – “In step 303, the receiver 201 checks if the pre-processed navigation signal is a genuine navigation signal or a spoofed navigation signal using the time series based neural network (such as, but not limited to, LSTM, RNN)… The time series based neural network can convert the normalized features (from the pre-processed navigation signal) to a meaningful vector”)
Karpathy further teaches:
classifying the received NS signal as uncertain (Fig. 4; [0078] – “In the event the classifier score does not exceed the threshold value, processing continues to 403.”) between the final decision neural network ([0068] – “classifier score determined by the trained trigger classifier… In some embodiments, a score of −1.0 is a negative example and a score of 1.0 is a positive example. Classifier scores lie between −1.0 and 1.0 to indicate how likely the raw input is a positive or negative example of the targeted use case.” [0077] – “At 411, a trigger classifier score is determined. For example, a trigger classifier score is determined by applying the trigger classifier to the intermediate results of the neural network… a particular range such as between −1 and +1 may be used to represent the likelihood the sensor data is a negative or positive example of the targeted use case.”) and the rules-based system. ([0028], [0068] – “a threshold may be determined that is compared to the classifier score determined by the trained trigger classifier. Using a threshold of 0.5, a classifier score of 0.7 indicates the data is likely representative of a tunnel exit.” [0078] – “At 413, a determination is made whether the classifier score exceeds a threshold” Examiner notes that uncertain spoofing is determined when a “no” decision is made at step 413 based on, e.g., a classifier score of 0.3 and threshold of 0.5. A classifier score greater than zero corresponds to a positive example according to the classifier. However, only a classifier score greater than 0.5 corresponds to a positive example according to the rules-based system. Therefore, a classifier score of 0.3 will be determined “uncertain” at step 413 and not retained for training when the final decision neural network and rules-based system disagree.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Karpathy’s known technique to Lall’s known method ready for improvement to yield predictable results. Such a finding is proper because (1) Lall teaches a base method of pre-processing a received signal in order to determine features/metrics associated with the signal, and then inputting those features/metrics into a neural network for classifying whether the signal is affected by spoofing; (2) Karpathy teaches a neural network with intermediate outputs and a trigger classifier for determining a classifier score, wherein the intermediate output is the input to the trigger classifier. Karpathy further teaches that these systems are for analyzing pre-processed sensor data; (3) One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system; and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (See MPEP 2143).
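The score/threshold interplay described in the Examiner’s note above can be sketched as follows. The function and the spoofed/genuine/uncertain labels are illustrative of the claimed combination (Karpathy’s scores rate positive/negative examples of a use case generally); the 0 and 0.5 cut-offs come from the cited paragraphs:

```python
# The classifier treats any score above 0 as a positive example, while the
# rules-based threshold only accepts scores above 0.5, so a score such as
# 0.3 is 'uncertain' -- the two stages disagree.
THRESHOLD = 0.5

def classify(score: float) -> str:
    classifier_positive = score > 0.0    # final decision neural network
    rules_positive = score > THRESHOLD   # rules-based system
    if classifier_positive and rules_positive:
        return "spoofed"
    if classifier_positive != rules_positive:
        return "uncertain"
    return "genuine"

print(classify(0.7))   # both stages agree -> "spoofed"
print(classify(0.3))   # stages disagree -> "uncertain"
print(classify(-0.4))  # both stages agree -> "genuine"
```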
Regarding claim(s) 24, 28-32,
Claim(s) 24, 28-32, is/are claims corresponding to claim(s) 8, 12-16 respectively. Accordingly, the Examiner’s remarks and application of the prior art with respect to claim(s) 24, 28-32, are substantially the same as those made above with respect to claim(s) 8, 12-16.
Claim(s) 9-10, 25-26 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20250012928 A1 to Lall in view of US 20220308237 A1 to Clausen.
Regarding claim 9,
Lall teaches the invention as claimed and discussed above.
The computer implemented method of claim 1 further comprising:
obtaining a set of [non-NS signal] additional metrics (bracketed limitations correspond to lined-through limitations not taught by the reference); and
using the set of [non-NS signal] additional metrics as additional inputs to the neural network ([0022-23, 30] – “plurality of features of the received navigation signal… use a time series based neural network… mapping an impact of each of the plurality of features”)
Lall does not explicitly teach the additional elements of the claim.
Clausen teaches:
obtaining a set of non-NS signal metrics; ([0079] – “The GNSS receiver module illustrated in FIG. 1 further comprises an inertial measurement unit (IMU) 130 including sensors configured to generate inertial measurements.”) and
using the set of non-NS signal metrics as additional inputs to classify whether a spoofing attack is present ([0079] – “The inertial measurements generated by the IMU and the measurements provided by the GNSS receiver are used by the processor to determine whether the vehicle is being subjected to a spoofing attack.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Clausen’s known technique to Lall’s known method ready for improvement to yield predictable results, i.e., using Clausen’s non-NS signal metrics as additional inputs into Lall’s neural network. Such a finding is proper because (1) Lall teaches a base method of inputting GNSS features/metrics into a neural network for determining whether the signal is affected by spoofing; (2) Clausen teaches a specific technique of combining inertial measurements with GNSS measurements to determine spoofing (see, e.g., Clausen [0100]); (3) One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system (see, e.g., Clausen [0100]); and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (See MPEP 2143).
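The proposed combination can be sketched minimally as follows; the metric names and values are hypothetical and serve only to illustrate widening the network input with non-NS (e.g., IMU) metrics:

```python
import numpy as np

# Hypothetical NS-signal metrics (e.g., C/N0, Doppler, code-phase residual)
ns_metrics = np.array([0.82, -1.10, 0.05])
# Hypothetical non-NS metrics from an IMU (e.g., accelerometer x/y/z)
imu_metrics = np.array([0.01, -0.02, 9.81])

# Concatenate so the non-NS metrics become additional inputs to the
# same classifier, widening its input vector.
nn_input = np.concatenate([ns_metrics, imu_metrics])
print(nn_input.shape)  # (6,) -- one combined input vector
```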
Regarding claim 10,
Lall in view of Clausen teaches the invention as claimed and discussed above.
Clausen further teaches:
The computer implemented method of claim 9, wherein the set of non-NS signal metrics comprises metrics from an inertial navigation system. ([0079] – “The inertial measurements generated by the IMU and the measurements provided by the GNSS receiver are used by the processor to determine whether the vehicle is being subjected to a spoofing attack.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Clausen’s known technique to Lall’s known method ready for improvement to yield predictable results, i.e., using Clausen’s non-NS signal metrics as additional inputs into Lall’s neural network. Such a finding is proper because (1) Lall teaches a base method of inputting GNSS features/metrics into a neural network for determining whether the signal is affected by spoofing; (2) Clausen teaches a specific technique of combining inertial measurements with GNSS measurements to determine spoofing (see, e.g., Clausen [0100]); (3) One of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in a more accurate system (see, e.g., Clausen [0100]); and (4) no additional findings based on the Graham factual inquiries are necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness (See MPEP 2143).
Regarding claim(s) 25-26,
Claim(s) 25-26 is/are claims corresponding to claim(s) 9-10 respectively. Accordingly, the Examiner’s remarks and application of the prior art with respect to claim(s) 25-26 are substantially the same as those made above with respect to claim(s) 9-10.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIANA CROSS whose telephone number is (571)272-8721. The examiner can normally be reached Mon-Fri 9am-5pm Pacific time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Kelleher can be reached on (571) 272-7753. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JULIANA CROSS/Examiner, Art Unit 3648
/William Kelleher/Supervisory Patent Examiner, Art Unit 3648