Prosecution Insights
Last updated: April 17, 2026

Application No. 18/432,261
SYSTEM AND METHOD FOR IDENTIFYING A STALKING VEHICLE
Non-Final OA: §103, §112

Filed: Feb 05, 2024
Examiner: POTTS, RYAN PATRICK
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 2m
Grant Probability with Interview: 99%
Examiner Intelligence

Career Allow Rate: 80% (above average; 189 granted / 235 resolved; +18.4% vs TC avg)
Interview Lift: +36.8% (strong; allowance rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 2m avg prosecution; 29 applications currently pending
Career History: 264 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§112: 27.9% (-12.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 235 resolved cases
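The headline figures above appear to follow from the raw counts shown in the cards. A minimal sketch of that arithmetic, assuming the tool computes the rate and deltas this way (the formulas are an inference about the dashboard's methodology, not a documented specification):

```python
# Sketch of how the dashboard's headline figures can be derived from the raw
# counts shown above. Assumed methodology, not the tool's documented formulas.

granted, resolved = 189, 235

# Career allow rate: granted / resolved
allow_rate = granted / resolved          # ~0.804, displayed as 80%

# Delta vs. Tech Center average: examiner rate minus the TC estimate,
# so the implied TC average is the examiner rate minus the displayed delta
tc_avg = allow_rate - 0.184              # implied TC average, roughly 62%

# Interview lift: allowance rate with an interview minus the rate without,
# among resolved cases that had an interview (per the card's footnote)
interview_lift = 0.368

print(f"allow rate: {allow_rate:.1%}, implied TC avg: {tc_avg:.1%}")
```

The statute-specific rows above would follow the same pattern, with each rate computed over the subset of resolved cases that received that rejection type.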

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

In nonprovisional applications, applicants and other individuals substantively involved with the preparation and/or prosecution of the application have a duty to submit to the Office information which is material to patentability as defined in 37 CFR 1.56. No IDS has been filed. Applicant is encouraged to submit an IDS listing any known prior art that is material to patentability, for example, any algorithms or systems for classifying following vehicles as stalking vehicles or tracking vehicles along public roads using a centralized traffic monitoring system that wirelessly communicates with vehicles driving on the roads and visible to a distributed network of roadside cameras and/or edge devices, e.g., prior art that discloses applications of vehicle-to-infrastructure (V2I) communications.

Drawings

The drawings are objected to because (1) the letter “n” in the word “Comparison” within the phrase “Turn Comparison” in step 140 of FIG. 4 should be on the same line as “Compariso”; the Examiner suggests making the diamond-shaped decision block larger to accommodate the full word “Comparison”; and (2) the drawings do not use the abbreviation “FIG.” and do not use capital letters to denote partial views. See 37 C.F.R. 1.84(u)(1). For example, instead of “Fig. 3a”, “FIG. 3A” must be used. The description/reference to the drawings should be updated in the specification wherever applicable to be consistent with the drawings.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended.
The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities:

At page 18, line 20, “the methodology of Turn Comparison” is confusing because of the capitalization of “Turn Comparison”. An algorithm or methodology called “Turn Comparison” could not be found. The phrase “Turn Comparison” should be changed to “turn comparison” unless Applicant intended to refer to a specific algorithm or known method in the prior art, in which case such known algorithm or known method should be listed on an IDS and properly attributed in the specification.

The word “alert” is used in both lowercase and uppercase forms, i.e., “if an Alert has occurred ... where an alert message is” on page 17. There does not appear to be a reason to capitalize the word “alert” unless it begins a sentence. Accordingly, “Alert” at line 8 of page 17 and line 11 of page 19 should be written as “alert”.

Appropriate correction is required.
Claim Objections

Claim 25 is objected to because of the following informalities: “comparing in real time said final image data to respective image data corresponding to respective image data corresponding to 3 prior detected turns” should be changed to “comparing in real time said final image data to respective image data corresponding to 3 prior detected turns” for clarity. Appropriate correction is required.

Claim Interpretation

Under a broadest reasonable interpretation (BRI), words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term means the ordinary and customary meaning given to the term by those of ordinary skill in the art at the time of the invention. MPEP 2173.01, subsection I.

The phrase “identifying data” as recited in claims 1-4, 6, 7, 17-20, and 26, is not defined in the specification. Under a BRI of “identifying data”, the plain meaning is data for identifying. Since claim 18 uses the phrase “image data” instead of “identifying data”, the two phrases are considered equivalents.

Under a BRI of the phrase “algorithms configured for machine learning (ML) and artificial intelligence (AI)” as recited in claims 2, 7, 10, 14, 19, 20 and 23, the plain meaning of “configured” in the context of computers is to change a computer or other device so that it can be used in a particular way. [1] Therefore, the plain meaning of the phrase “configured for” in the context of the claims is algorithms that have changed a computer (e.g., by being executed) so that the computer can be used (i.e., is capable of being used) in a machine learning or artificial intelligence paradigm. Such an algorithm is therefore not required to be explicitly defined or described as a strictly “ML” algorithm or “AI” algorithm. The only requirement is that the algorithm is capable of operating in a process or system that is designed to perform an ML task or an AI task.
Under a BRI of the phrase “streets and roadways” used to describe the “traffic surveillance system” as recited in claims 6, 20 and 26, the plain meaning of “street” is a public thoroughfare, usually paved, in a village, town, or city, including the sidewalk or sidewalks, or the roadway of such a thoroughfare, as distinguished from the sidewalk [2], and the plain meaning of “roadway” (singular representative of “roadways”) is the part of a road over which vehicles travel [3]. It is noted that “streets”, “roadways”, and “roads”, without further context, are synonyms. [4] It is also noted that the only place where “street” or “streets” appear in the application is in original claims 6, 20 and 26. Since Applicant has not provided a special definition of “streets” or “roadways”, the plain meaning of each term is the same: roads along which automobiles pass. Thus, the phrase “automobiles that pass a plurality of cameras distributed along streets and roadways” in claims 6, 20 and 26 means: automobiles that pass a plurality of cameras distributed along a plurality of roads.

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): “mechanism for,” “module for,” “device for,” “unit for,” “component for,” “element for,” “member for,” “apparatus for,” “machine for,” or “system for.” (emphasis added). MPEP 2181, subsection I(A).

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitation uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “said computing device, executing said programming, is operable to perform the steps of comparing…” in claims 18-25.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-26 are rejected under 35 U.S.C.
112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-23 and 26 are rejected under 35 U.S.C. 112(b) as being incomplete for omitting essential steps, such omission amounting to a gap between the steps. See MPEP § 2172.01. The omitted steps are:

comparing the collected image data with an automobile database using algorithms configured for machine learning (ML) and artificial intelligence (AI) to determine and generate identifying data points (pg. 10, “In a critical aspect, the computing device 20, when executing programming related to the step of comparing the collected identifying data to data from the automobile database, is operable to use algorithms that utilize the paradigms known as machine learning (ML) and artificial intelligence (AI) to determine and generate the identifying data points.”; emphasis added);

accessing a traffic surveillance system in real-time via the Internet (pg. 11, “In another critical aspect, the computing device, executing respective programming, is capable of accessing a traffic surveillance system 50 (also referred to as a traffic surveillance system) in real-time via the Internet 11.”; emphasis added); and

accessing a traffic mapping system in real-time and via the Internet to determine a statistical probability indicative of a likelihood that two drivers would choose the same route having a common destination (pgs. 13-14, “In another critical aspect, the computing device 20, executing respective programming, is capable of accessing a traffic mapping system 60 in real-time and via the Internet 11. ...
Accordingly, the computing device 20 may be configured to determine a statistical probability indicative of a likelihood that two drivers would choose the same route having a common destination, considering elements such as the length of the chosen route, number of turns or connecting roadways along said selected route, etc.”; emphasis added).

Claims 18-25 are rejected under 35 U.S.C. 112(b) as being incomplete for omitting essential elements, such omission amounting to a gap between the elements. See MPEP § 2172.01. The omitted elements are:

a computing device executing the comparison of collected image data with an automobile database using algorithms configured for machine learning (ML) and artificial intelligence (AI) to determine and generate identifying data points (pg. 10, “In a critical aspect, the computing device 20, when executing programming related to the step of comparing the collected identifying data to data from the automobile database, is operable to use algorithms that utilize the paradigms known as machine learning (ML) and artificial intelligence (AI) to determine and generate the identifying data points.”; emphasis added);

the computing device accessing a traffic surveillance system in real-time via the Internet (pg. 11, “In another critical aspect, the computing device, executing respective programming, is capable of accessing a traffic surveillance system 50 (also referred to as a traffic surveillance system) in real-time via the Internet 11.”; emphasis added); and

the computing device accessing a traffic mapping system in real-time and via the Internet to determine a statistical probability indicative of a likelihood that two drivers would choose the same route having a common destination (pgs. 13-14, “In another critical aspect, the computing device 20, executing respective programming, is capable of accessing a traffic mapping system 60 in real-time and via the Internet 11. ...
Accordingly, the computing device 20 may be configured to determine a statistical probability indicative of a likelihood that two drivers would choose the same route having a common destination, considering elements such as the length of the chosen route, number of turns or connecting roadways along said selected route, etc.”; emphasis added).

The omitted steps and elements described above are critical because the specification states they are critical. Additionally, their criticality arises from the repeated emphasis in the specification and the claims that the operations are performed in “real-time”, which could not otherwise be accomplished without the incorporation of all the admittedly critical steps and elements.

Claim 1 recites, in part, “comparing said collected identifying data with an automobile database accessed via the Internet containing automobile identification characteristics so as to generate identifying data that includes all automobiles that exhibit said collected identifying data” (emphasis added). Claim 18 recites, in part, “comparing said collected image data with an automobile database accessed via the Internet containing automobile identification characteristics so as to generate identifying data that includes all automobiles that exhibit said collected image data” (emphasis added). Claim 26 recites, in part, “comparing said collected identifying data with an automobile database accessed via the Internet containing automobile identification characteristics so as to generate identifying data that includes all automobiles that exhibit said collected identifying data” (emphasis added).

The phrase “all automobiles that exhibit said collected identifying data” implies that Applicant is attempting to claim identifying any and all possible vehicles that exhibit or are indicated by the collected data from the rear-facing camera. Such an interpretation is not enabled by the specification and would necessitate a rejection under 35 U.S.C. 112(a).
However, to avoid that rejection in the interest of compact prosecution, based on the specification at page 9, “identifying data indicative of all automobiles that exhibit the identified characteristics” means, in one embodiment: “the computing device 20, under program control, may access the automobile identification database 40 via the Internet 11 and submit images collected/captured by the imaging assembly 30 for comparison and identification. It is understood that the imaging assembly 30 may be capable of either individually or in cooperation with program code being executed by the computing device 20—to represent following automobiles numerically, such as using pixel and pixel combinations indicative of color data, shape data, grill pattern data, bumper data, headlight data, windshields data, and the like. It is with this input data that the automobile identification database 40 may be configured to identify a make and model and color that matches the submitted pixel inputs.” (emphasis added).

Thus, according to the specification, and for purposes of applying prior art, “all automobiles that exhibit said collected identifying data” means that at a minimum, a category or a type of vehicle, e.g., a following vehicle, is identified from information of a known vehicle in the automobile database that a model of the computing device has been trained to recognize.

Dependent claims 2-17 and 19-25 are rejected for inheriting and not curing the deficiencies of claims 1 and 18, respectively.

Claims 2, 7, 10, 14, 20 and 23 recite, in part, “algorithms configured for machine learning (ML) and artificial intelligence (AI)” (emphasis added). Relatedly, claims 3, 8, 11, 16, 19, 20 and 23 recite, in part, “said ML/AI algorithms” (emphasis added). It is unclear what “configured for” means.
Applicant provides no specific examples of ML algorithms or description of how a deep learning (DL) or ML-based algorithm is “configured” or designed or trained to create an “AI” system, or what separates an “AI” algorithm from an “ML” algorithm since ML is admittedly under the “umbrella” of “AI”. See specification at page 11.

Claim limitations that precede “said ML/AI algorithms” recite “algorithms configured for machine learning (ML) and artificial intelligence (AI)” and not “ML/AI algorithms” or the like. It is unclear what “/” means in the phrase “ML/AI algorithms”. Since claims 2, 7, 10, 14, 20 and 23 provide the only potential sources of antecedent basis in the pending claims for “said ML/AI algorithms”, those claims are assumed to include the subject matter that was intended as the antecedent basis of “said ML/AI algorithms”. However, “said ML/AI algorithms” is still ambiguous and does not constitute a proper antecedent basis because the claimed subject matter of the assumed antecedent basis (ML and AI algorithms) is not clear and particular, and could refer to algorithms that combine ML and AI concepts, multiple algorithms that include at least one ML algorithm and one AI algorithm, algorithms that are considered to be both ML and AI, or any algorithm that includes a trainable model and is designed to emulate or mimic a human behavior.

The “algorithms” in claim 2, for example, are “configured for” ML and AI, not necessarily ML and/or AI algorithms per se as seemingly referenced by “ML/AI” in claim 3. The confusion is compounded by the language “algorithms configured for machine learning (ML) and artificial intelligence (AI)”, which is ambiguous. Additionally, the claims and the specification do not clarify what is meant by “configured for” in the context of ML and AI “algorithms”.
Algorithms and applications of ML and AI may use other algorithms, e.g., rule-based systems, that are not ML and AI “algorithms” in themselves, but when incorporated as part of an ML or AI-based process, could thereby be interpreted as “configured for” ML and AI, which are “paradigms” as noted by Applicant on page 8 of the specification, and not a discrete list of universally-accepted ML-specific algorithms and AI-specific algorithms, which is not provided in the specification.

“ML” and “AI” are fields of computer science or paradigms or design goals, not labels for specific algorithms. [5] While some algorithms are generally considered “ML algorithms”, those same algorithms occur in many self-ascribed “AI” systems, which confuses the matter as to whether the same algorithm would be an “ML” or an “AI” model in that case. “Artificial intelligence” is more of a design goal to develop systems that mimic human thinking or solve human-centric tasks like driving, than an explicit category of algorithms. [6] ML is often considered a sub-category of AI, as noted by Applicant on page 11 of the Specification. [7] However, even for those of skill in the art, “machine learning” is often used as a synonym for “artificial intelligence”. While some of ordinary skill in the art may consider every ML algorithm to also be an AI algorithm, others would not. Thus, there is so much variability in how these terms can be interpreted that a reasonable interpretation is not readily apparent.
The specification provides no specific examples of “ML” or “AI” algorithms, let alone an algorithm that is both an “ML” and an “AI” algorithm, or, still further, a combination of a specific “ML” algorithm with a specific “AI” algorithm working together to perform aspects of the claimed embodiments, that would guide a POSITA to an understanding of the differences between these terms as used in the claims. Instead, the specification only describes and refers to these terms/concepts/paradigms at the same high, abstract level of specificity and generalized description as the claims. The specification therefore does not provide a sufficient description for a POSITA to understand what exactly is intended by a step of “comparing” (claims 2, 7, 14 and 20) or “determining” (claims 10 and 23) that includes multiple “algorithms configured for ... ML ... and ... AI”, and what exactly is intended by training “said ML/AI algorithms” (claims 3, 8, 11, 16, and 19).

Training paradigms, e.g., supervised learning, unsupervised learning, and semi-supervised learning (which are not mentioned in the Specification), are not applied interchangeably; in practice, they are applied to specific applications of trainable machine-implemented models depending upon application-specific conditions and/or constraints.

Accordingly, claims 2, 3, 7, 8, 10, 11, 14, 16, 19, 20, and 23 are indefinite for the above reasons and because the scope of each claim cannot be readily ascertained. Considering the application as a whole, it appears Applicant did not intend to claim any new “ML” or “AI” algorithm per se, but rather intended to refer to the entire collective fields of machine learning and artificial intelligence in a general sense, i.e., all suitable model learning algorithms for training a model to recognize one or more specific patterns within training data.
Thus, for purposes of applying prior art, “algorithms configured for machine learning (ML) and artificial intelligence (AI)” is interpreted to mean: algorithms that have changed a computer (e.g., by being executed) so that the computer can be used (i.e., is capable of being used) in a machine learning or artificial intelligence paradigm. Such an algorithm is therefore not required to be explicitly defined or described as a strictly “ML” algorithm or “AI” algorithm, due to the Examiner’s inability to distinguish between the two terms as intended by Applicant via the written description of the disclosed embodiments.

Dependent claims 4-5, 15, 16 and 21 are rejected for inheriting and not curing the deficiencies of claims 2, 3, 7, 8, 10, 11, 14, 16, 19, 20, and 23.

Claim 4 recites, in part, “wherein said step of comparing said generated identifying data includes determining if said generated identifying data from a most recent data check matches said generated identifying data from a predetermined consecutive number of prior data checks and, if so, generating potential stalking vehicle data.” (emphasis added). The claims do not describe any data checks, which makes the meaning of “matches” unclear. For purposes of applying prior art, and based on lines 5-8 on page 10 of the specification, claim 4 is interpreted to mean that a vehicle is labeled or recognized as a potential stalking vehicle if it has been identified repeatedly during a window of time. Dependent claim 5 is rejected for inheriting and not curing the deficiencies of claim 4.

Claims 6, 20 and 26 recite, in part, “automobiles that pass a plurality of cameras distributed along streets and roadways” (emphasis added). It is unclear what the difference(s) between “streets” and “roadways” is/are. Per the Claim Interpretation section above, there is no difference between “streets” and “roadways” based on their plain meaning in the context of the claims.
For purposes of applying prior art, “automobiles that pass a plurality of cameras distributed along streets and roadways” in claims 6, 20 and 26 means: automobiles that pass a plurality of cameras distributed along a plurality of roads. Dependent claims 7, 8 and 21 are rejected for inheriting and not curing the deficiencies of claims 6 and 20.

Claim 17 includes two clauses: “said collected identifying data includes license plate data, camera data, video data, color data, shape data, grill pattern data, bumper data, headlight data, windshield data” and “said automobile identification characteristics include color data, shape data, grill pattern data, bumper data, headlight data, windshield data” (emphasis added). Because each clause ends with “, windshield data”, it is unclear whether the end of each clause should be interpreted as “, or windshield data” or “, and windshield data”. The claim’s scope varies significantly between the two interpretations.

Lines 17-21 on page 9 of the specification provide, “It is understood that the imaging assembly 30 may ... represent following automobiles numerically, such as using pixel and pixel combinations indicative of color data, shape data, grill pattern data, bumper data, headlight data, windshields data, and the like”. Since the specification indicates that the subject matter of claim 17 is a list of examples of different types of objects and object appearance, for purposes of applying prior art, claim 17 is interpreted to recite: “The method as in claim 1, wherein: said collected identifying data includes at least one of license plate data, camera data, video data, color data, shape data, grill pattern data, bumper data, headlight data, or windshield data; and said automobile identification characteristics include at least one of color data, shape data, grill pattern data, bumper data, headlight data, or windshield data”. See Superguide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 69 USPQ2d 1865 (Fed. Cir. 2004).
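The claim 4 construction discussed above (a vehicle flagged as a potential stalking vehicle when the most recent data check matches a predetermined number of consecutive prior checks) can be illustrated with a minimal sketch. The function name, window size, and identifier strings below are hypothetical illustrations, not terms from the application or the prior art:

```python
# Minimal sketch of the claim 4 interpretation: generate "potential stalking
# vehicle" data when the identification from the most recent data check
# matches a predetermined number of consecutive prior checks. All names and
# the window size of 3 are hypothetical, chosen only for illustration.

from collections import deque

def check_potential_stalker(history: deque, latest_id: str, n_prior: int = 3) -> bool:
    """Return True if latest_id matches the n_prior most recent prior checks."""
    recent = list(history)[-n_prior:]
    is_match = len(recent) == n_prior and all(v == latest_id for v in recent)
    history.append(latest_id)  # record this check for future comparisons
    return is_match

checks = deque()
observations = ["sedan_blue", "sedan_blue", "sedan_blue", "sedan_blue"]
flags = [check_potential_stalker(checks, v) for v in observations]
print(flags)  # [False, False, False, True]: flagged once 3 consecutive priors match
```

Under this reading, the "predetermined consecutive number" is simply the length of the look-back window, consistent with the Examiner's interpretation that repeated identification during a window of time triggers the flag.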
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8, 12-14, 17-21 and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over Reminding Drivers of the Stalking Vehicles on the Road to Sun et al. (hereinafter “Sun”) in view of U.S. Pat. Appl. Pub. No. 2022/0369066 to Somanath (hereinafter “Somanath”), and in further view of U.S. Pat. Appl. Pub. No. 2021/0076009 to Choi (hereinafter “Choi”).
Regarding claim 1, Sun teaches a method for identifying in real time if a vehicle having access to the Internet is being stalked, comprising:

repeatedly receiving in real time (Sun, pg. 2, section II.2, “we use the You Only Look Once (YOLO) algorithm [13], which can provide real-time object detection and identification in videos, images, and live feeds with high speed and accuracy.”) image data from a rear-facing imaging assembly mounted on the vehicle (Sun, pg. 4, section III.A, “We do experiments with the smartphone … since it has camera and IMU sensors. It will be deployed on the rear deck or rear windshield of our vehicle”) that is indicative of at least one following vehicle (Sun, pg. 2, section II.2, “bounding box regression is used to highlight the outline of the detected object in the image.”), said rear-facing imaging assembly including at least one sensor configured to collect identifying data (image data) related to said at least one following vehicle (Sun, pg. 4, section III.A, “smartphone … has camera and IMU sensors.”);

comparing said collected identifying data with an automobile identification model (Sun, pg. 2, section II.A.3, “pre-train YOLO model [18] to track and identify the following vehicles”) containing automobile identification characteristics (pg. 2, section II.A.2, “trained on COCO dataset [17]”; COCO includes a vehicle category. As extrinsic evidence that the disclosed pre-trained YOLO model was trained on vehicle images, see Microsoft COCO: Common Objects in Context to Lin et al.) so as to generate identifying data (neural network output) that includes all automobiles that exhibit said collected identifying data (Sun, pg. 4, section III.A.2, “We use a pre-trained deep neural network model proposed in [18] to track and identify the following vehicles.”; the neural network is trained to (ideally) identify all potential following vehicles that are abnormal, i.e., are stalking vehicles, and identify all potential vehicles that are normal.
The output of the model is data that identifies the vehicle within the processed image(s) as being a stalking vehicle or not.); and comparing said generated identifying data over a plurality of real time intervals (pg. 5, section IV.A.1, “detected and highlighted with the bounding box accurately over four-time snapshots”; Four points in time of a continuously tracked object include a plurality of intervals, one between the first and second snapshots, one between the second and third snapshots, and another between the third and fourth snapshots. Thus, the model is applied over a plurality of real time intervals.) so as to determine if said generated identifying data is indicative of a stalking vehicle (Following time is calculated per equation (1), which divides the difference between time indices of the frames corresponding to when the following vehicle “first appeared in the rear view of our vehicle” and “the frame the following vehicle has disappeared in the rear view of our vehicle”, by the camera’s frame rate. See Sun at pg. 2, section II.A.3.), but does not teach that which is explicitly taught by Somanath. Somanath teaches comparing collected identifying data (Somanath, par. 21, “the camera 160 is arranged to capture images of vehicles following the vehicle 125 … and … can be a digital camera that captures digital pictures or a video camera that captures video clips or produces streaming video.”; pars. 55-57, “the vehicle-based surveillance system module 730 may utilize the image processing module 735 to process images that are provided to the security computer 150 by the camera 135 … Various image processing techniques may be used such as, for example, an image processing algorithm modeled on a neural network that is trained to analyze images of the vehicle 205 at various times and to determine a pattern of travel of the vehicle 205 and/or a behavioral pattern of the vehicle 205. 
In some embodiments, reference images stored in the database 740 and/or fetched from device such as the computer 106 of the records agency 105 and/or the computer 117 of the police station 115 may be used by the image processing module 735 for identifying the vehicle 205 and to analyze actions performed by the vehicle 205.”) with an automobile database accessed via the Internet (Somanath, par. 19, “The network 110 may include … public networks such as the Internet.”) containing automobile identification characteristics (Somanath, par. 56, “reference images … fetched from … the computer 106 of the records agency 105 and/or the computer 117 of the police station 115 may be used by the image processing module 735 for identifying the vehicle 205 and to analyze actions performed by the vehicle 205.”) so as to generate identifying data (Somanath, par. 55, “an image processing algorithm modeled on a neural network that is trained to analyze images of the vehicle 205 at various times and to determine a pattern of travel of the vehicle 205 and/or a behavioral pattern of the vehicle 205”). Sun discloses a Privacy-Preserving Defensive Driving system deployed in a vehicle that uses a rear-facing camera mounted on the rear window to continuously detect following vehicles in real time and uses IMU data indicating left/right turns with a neural-network based algorithm that uses the YOLO model for real-time object detection and recognition to identify whether a detected following vehicle is actually a stalking vehicle that is maliciously following the P2D2 vehicle for a predetermined number of turns. 
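Sun's equation (1), cited above, divides the difference between the frame indices at which the following vehicle first appears in and disappears from the rear view by the camera's frame rate. A minimal Python sketch of that computation (the function and parameter names are illustrative, not from Sun):

```python
def following_time_seconds(first_frame: int, last_frame: int, fps: float) -> float:
    """Following time per Sun's equation (1): frame-index difference / frame rate.

    first_frame: index of the frame in which the following vehicle first
                 appears in the rear view of the host vehicle.
    last_frame:  index of the frame in which it disappears from the rear view.
    fps:         camera frame rate in frames per second.
    """
    if last_frame < first_frame:
        raise ValueError("last_frame must not precede first_frame")
    return (last_frame - first_frame) / fps

# A vehicle visible from frame 120 to frame 1920 at 30 fps has followed
# for (1920 - 120) / 30 = 60 seconds.
print(following_time_seconds(120, 1920, 30.0))
```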
Thus, Sun shows that it was known in the art before the effective filing date of the claimed invention to continuously monitor the feed from a rear-facing camera observing the area behind a vehicle to identify stalking vehicles based on algorithms that are configured for ML or AI image processing applications, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, increasing awareness of suspicious or dangerous events surrounding a vehicle. Somanath discloses a vehicle that establishes a geofenced zone around it to detect a stalking vehicle and accesses a number of databases over the Internet to retrieve data and information, including reference images (par. 56), for identifying the stalking vehicle with a neural-network based algorithm for learning the behaviors and patterns of stalking vehicles, where the retrieved reference images are fed to a trained neural network (par. 55) executed on a computer of the vehicle to recognize a travel pattern or behavioral pattern of a following vehicle. Thus, Somanath shows that it was known in the art before the effective filing date of the claimed invention to have Internet access in vehicles with systems to detect stalking vehicles and an onboard computer to leverage the most useful data available to assess whether the following vehicle poses a threat (i.e., is a stalking vehicle), which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, increasing awareness of suspicious or dangerous events surrounding a vehicle. 
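Sun's pre-trained YOLO model detects and tracks following vehicles across consecutive video frames. The cited excerpts do not disclose the tracker's internals; a common way to associate a bounding box in one frame with the same vehicle in the next is intersection-over-union (IoU) matching, sketched generically below (the names and the 0.5 threshold are assumptions, not Sun's actual implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def same_vehicle(prev_box, new_box, threshold=0.5):
    """Treat detections in consecutive frames as the same vehicle when
    their bounding boxes overlap strongly (IoU at or above the threshold)."""
    return iou(prev_box, new_box) >= threshold
```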
A person of ordinary skill in the art would have been motivated to combine the Internet-based communication between a traffic surveillance system, police station computer, records agency computer, and in-vehicle security computer as disclosed by Somanath with the smartphone, IMU-based turn detection, rear-camera sensor and trainable model disclosed by Sun to thereby download reference vehicle images, re-train the model deployed in the vehicle, and identify stalking vehicles exhibiting characteristics shown in the downloaded images by determining a following time from collected rear-camera image data and a number of predetermined turns using the CDB turn detection and providing that data to the re-trained model; or, alternatively, to skip re-training and combine the existing model’s output with identification data derived from the downloaded images. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield predictable results with the benefit of identifying stalking vehicles in a greater number and variety of scenarios. Sun in view of Somanath does not teach that which is explicitly taught by Choi. Choi teaches a rear-facing imaging assembly mounted in the vehicle (Choi discloses a vehicle camera system using four cameras, one facing each of the forward, backward, left, and right directions. See Choi at par. 79. A second camera 132 of the four cameras “may be mounted in a rear window glass of the rear of the vehicle 1, may be mounted in the window inside the vehicle 1 to face the exterior of the vehicle 1”. Choi at par. 81). Sun in view of Somanath is analogous to the claimed invention for the same reasons provided above. 
Choi discloses an autonomous vehicle with four cameras arranged to detect dangerous events around the vehicle, such as traffic accidents, by acquiring images and identifying events in the surrounding area of the vehicle based on the images acquired in forward, backward, left, and right directions, where the rear-facing camera is mounted in the vehicle behind the glass of the rear window to recognize objects located behind the vehicle and obtain the numbers from license plates. Thus, Choi shows that it was known in the art before the effective filing date of the claimed invention to mount a rear-facing camera inside the vehicle to identify and monitor events taking place behind the vehicle, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, increasing awareness of suspicious or dangerous events surrounding a vehicle. A person of ordinary skill in the art would have been motivated to modify the P2D2 vehicle disclosed by Sun in view of Somanath by adding cameras facing forward, left, and right directions with the rear-facing camera being relocated to a position within the P2D2 vehicle and facing behind the vehicle and also adding the automatic local download of event images and associated contextual data as disclosed by Choi, to thereby identify stalking vehicles behind the vehicle and obtain a richer set of data for future re-training of the P2D2 vehicle’s neural network-based algorithms configured for ML and AI. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield predictable results with the benefit of improving the accuracy of the neural network’s outputs by training it on not only a larger quantity of data, but a more diverse collection of data that is unique to the vehicle and its typical surrounding environments. 
Regarding claim 2, Sun in view of Somanath and in further view of Choi teaches the method as in claim 1, wherein said step of comparing said collected image data to said automobile database (Somanath, par. 56, “reference images … fetched from … the computer 106 of the records agency 105 and/or the computer 117 of the police station 115 may be used by the image processing module 735 for identifying the vehicle 205 and to analyze actions performed by the vehicle 205.”) includes algorithms configured for machine learning (ML) and artificial intelligence (AI) operable to determine said generated identifying data (YOLO is a deep learning architecture for object detection in real time. Deep learning, in general, is a type of machine learning. However, the goal of using the algorithm is informative as to whether it may also be considered an AI algorithm. In this case, the algorithm(s) is/are designed to mimic the human behavior of checking the rear-view mirror for suspicious or dangerous following vehicles. Thus, the algorithm(s) disclosed by Sun in view of Somanath and in further view of Choi are configured for ML and AI). The rationale for obviousness is the same as provided for claim 1. Regarding claim 3, Sun in view of Somanath and in further view of Choi teaches the method as in claim 2, wherein said ML/AI algorithms (YOLO and LOF are configured for ML and AI because they are useable for ML and AI applications. See Sun at pg. 2, section II.A.2, “To track and identify the following vehicles, we use the You Only Look Once (YOLO) algorithm [13]” and at pg. 3, section II.C, “After we obtain the following vehicle’s following time and our vehicle’s critical driving behavior within the following time, we adopt Local Outlier Factor (LOF) as our anomaly detection algorithm to detect the abnormal following vehicles”) are trained using repeated downloads of a plurality of automobile images (A download is a transfer of data from one location to another. 
Any model trained to detect or recognize certain vehicles must, at some point, download training images, i.e., transfer them into a location where the algorithm can repeatedly reference them during each iteration of training. Transferring a stored training image to a buffer or memory for processing with an ML or AI algorithm is thus downloading. See Somanath at pars. 55-56), each automobile image being represented as a plurality of pixel values associated with a respective automobile (Sun, pg. 2, section II.A.3, “pre-train YOLO model [18] to track and identify the following vehicles”; pg. 2, section II.A.2, “trained on COCO dataset [17]”; COCO includes a vehicle category. As extrinsic evidence that the disclosed pre-trained YOLO model was trained on vehicle images: see Microsoft COCO: Common Objects in Context to Lin et al., e.g., the “Vehicle” category in FIG. 11, “bus” and “car” in Table 1, and “car” in Table 2). The rationale for obviousness is the same as provided for claim 1. Regarding claim 4, Sun in view of Somanath, and in further view of Choi teaches the method as in claim 3, wherein said step of comparing said generated identifying data includes determining if said generated identifying data from a most recent data check (Sun - The most recent CDB detection is a most recent check or confirmation of the following time as likely describing a stalking vehicle.) matches said generated identifying data from a predetermined consecutive number of prior data checks (Sun - Prior CDB checks within the following time are consecutive prior data checks. The experimental results indicate that “normal following vehicles will not follow us after we make more than three critical driving behaviors (e.g., making left/right turns). However, we can see that some normal following vehicles are still following us after we make four critical driving behaviors. 
This is because we may share the same driving path with the normal following vehicles.” See section IV.A.3) and, if so, generating potential stalking vehicle data (Abnormal behavior is indicated by three or four CDBs within the following time. See Sun at Fig. 19 and section IV.A.3. The number of prior CDBs is predetermined because Local Outlier Factor (LOF) is used to detect the stalking vehicles and LOF requires an initial step of aligning the following time with CDBs of the host vehicle. See section II.C. Thus, the aligned data serves as “potential stalking data” because LOF needs to be subsequently applied before an “[abnormal] following vehicle is detected” as shown in FIG. 19. An annotated copy of FIG. 19 of Sun is provided below to aid in understanding the mapping between the prior art and the claimed invention(s).). [Annotated image] Sun, FIG. 19 (sub-figures re-arranged and annotated). An additional rejection of claim 4 is provided as an alternative ground of rejection in the event the limitation “predetermined” is not considered to be taught by the combination of Sun in view of Somanath and in further view of Choi. Regarding claim 4, under the alternative ground of rejection, Sun in view of Somanath and in further view of Choi teaches the method as in claim 3 and the remaining subject matter (see above), but does not explicitly teach wherein said step of comparing said generated identifying data includes determining if said generated identifying data from a most recent data check matches said generated identifying data from a predetermined consecutive number of prior data checks (emphasis added). Sun further teaches a predetermined consecutive number of prior data checks (See Sun at Fig. 19). Sun in view of Somanath is analogous to the claimed invention for the same reasons provided above. 
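Sun adopts Local Outlier Factor (LOF) over each following vehicle's aligned following time and critical-driving-behavior count. The cited excerpts do not give Sun's implementation; the standard LOF computation (k-distance, reachability distance, local reachability density, score) can be sketched as follows, where scores near 1 indicate normal points and scores well above 1 indicate outliers (candidate stalking vehicles in Sun's setting). All names are illustrative:

```python
import math

def lof_scores(points, k=2):
    """Local Outlier Factor for a small 2-D dataset, e.g. points of
    (following_time_seconds, num_critical_driving_behaviors)."""
    n = len(points)
    knn, kdist = [], []
    for i in range(n):
        # indices of the other points, ordered by distance from point i
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: math.dist(points[i], points[j]))
        knn.append(order[:k])                              # k nearest neighbors
        kdist.append(math.dist(points[i], points[order[k - 1]]))  # k-distance

    def lrd(i):
        # local reachability density: inverse mean reachability distance
        reach = [max(kdist[j], math.dist(points[i], points[j])) for j in knn[i]]
        return len(reach) / sum(reach)

    dens = [lrd(i) for i in range(n)]
    # LOF: mean ratio of neighbors' densities to the point's own density
    return [sum(dens[j] for j in knn[i]) / (k * dens[i]) for i in range(n)]
```

With four tightly clustered points and one far-away point, the far point receives a score far above 1 while the clustered points score near 1.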
Sun further discloses the P2D2 system as addressing the “scary” scenario of driving while being followed by a stalking vehicle and promoting “the driver’s privacy and safety first” because “it is essential to discriminate between stalking vehicles (i.e., following abnormal vehicles) and normal following vehicles. However, there are no infrastructure-free and ubiquitous in-vehicle systems that can achieve abnormal following vehicle detection while driving.” See Sun at Abstract. Thus, Sun recognizes that before the effective filing date of the claimed invention, there had been a recognized problem or need in the art including a design need (correctly discriminating abnormal from normal following vehicles) and market pressure (no infrastructure-free and ubiquitous in-vehicle solution exists) to solve the problem of drivers’ safety and privacy being put at risk. To confirm that a following vehicle is in fact a stalking vehicle (abnormal following vehicle) and not a normal following vehicle, gyroscope readings during the following time are used to detect critical driving behavior (CDB), i.e., making a left or right turn during the following time. Sun calculates cumulative distribution functions (CDFs) to determine optimal settings or parameters. For example, a CDF of gyroscope readings is calculated to set a threshold for recognizing turns as [0.2, 0.7]. See FIG. 6 and section II.B. The number of CDBs is also evaluated with a CDF where “the 95th percentile number of critical driving behaviors is three. Empirically, this indicates that the normal following vehicles will not follow us after we make more than three critical driving behaviors (e.g., making left/right turns).” See section IV.A.3 and FIG. 14. The CDF shows that the number of CDBs (detected turns) directly affects the reliability of the stalking determination and suggests a threshold of three CDBs as a suitable predetermined default number of prior data checks. 
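Assuming the [0.2, 0.7] band applies to the magnitude of a gyroscope reading (the cited excerpts do not specify the units or axis), the turn counting and predetermined-threshold check described above can be sketched as follows; the function names and sample-handling details are illustrative assumptions, not Sun's code:

```python
def count_turns(gyro_z, low=0.2, high=0.7):
    """Count left/right turns (critical driving behaviors, CDBs) from a
    gyroscope z-axis stream. A turn is counted when the absolute reading
    enters the [low, high] band after having been below the band, so a
    single sustained turn is not double-counted. The band follows the
    [0.2, 0.7] threshold Sun derives from a CDF of gyroscope readings."""
    turns, in_turn = 0, False
    for reading in gyro_z:
        magnitude = abs(reading)
        if not in_turn and low <= magnitude <= high:
            turns += 1
            in_turn = True
        elif magnitude < low:
            in_turn = False
    return turns

def is_abnormal_follower(gyro_z, cdb_threshold=3):
    """Flag a still-present following vehicle as a potential stalking
    vehicle once at least cdb_threshold turns have occurred, mirroring
    the 95th-percentile threshold of three CDBs discussed in Sun."""
    return count_turns(gyro_z) >= cdb_threshold
```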
Even though Sun acknowledges false positives, i.e., some normal vehicles following for 4 turns, the number of CDBs (turns) is recognized by Sun as affecting the result. Thus, the number of CDBs is a result-effective variable and Sun discloses a finite number (two CDB thresholds, 3 and 4) of identified, predictable potential solutions to the recognized need of a system for detecting abnormal following vehicles while driving or problem of drivers’ safety and privacy being put at risk. See Sun at Abstract and section IV.A.3). The CDF of FIG. 14 and the disclosure at section IV.A.3 suggest that setting a threshold of 3 CDBs in a deployed version of the P2D2 system would generate accurate stalking determinations and that adjusting the threshold to 4 would reduce false positives with the tradeoff of taking longer on average to make the final stalking determination as the threshold increases. Thus, one of ordinary skill in the art could have pursued the known potential solutions with a reasonable expectation of success. Therefore, it would have been obvious to try to modify the “[abnormal] following vehicle detection” condition block in the workflow of the P2D2 system (See Sun at FIG. 1) of Sun in view of Somanath and in further view of Choi by re-programming the condition to thereby check for a predetermined threshold number of detected turns (CDBs) since there are a finite number of identified (3 or 4 CDBs), predictable potential solutions to the recognized need or problem, and one of ordinary skill in the art could have pursued the known potential solutions with a reasonable expectation of success. Regarding claim 5, Sun in view of Somanath and in further view of Choi teaches the method as in claim 4, further comprising determining if said generated potential stalking vehicle data includes a single vehicle (A bounding box detects a single vehicle. 
See Sun at section II.A.2, “bounding box regression”) and, if so, generating final stalking vehicle data (An “anomaly score” indicates whether a detected following vehicle is a stalking vehicle. See Sun at section II.C). Regarding claim 6, Sun in view of Somanath and in further view of Choi teaches the method as in claim 1, further comprising: accessing in real time (Somanath, par. 26, “the security computer 150 may obtain information in real time from the cloud (via the network 110). In one case, the information can include an evaluation and/or analysis of the actions being carried out by the vehicle 205”) a traffic surveillance system configured for monitoring traffic flow (The computer 106 of the records agency 105, the computer of the police station 115, and the network 110 are a traffic surveillance system. See Somanath at FIG. 3. Even if no request for help has been generated in a traffic pattern, “the police officer in the police vehicle may continue to be on alert for any request for help.” See Somanath at par. 41. See also Somanath at par. 29, “the evaluation of the matched VIN number stored in the memory may indicate that the VIN number has been tagged as a hostile entity. If so, the security computer 150 may automatically communicate with the computer 117 in the police station 115 to alert the police officer 116 of a security threat posed by the vehicle 205”), said traffic surveillance system configured to identify automobiles that pass a plurality of cameras distributed along streets and roadways (See FIG. 2 of Somanath. The vehicles travel along roads and each vehicle has a camera that observes other vehicles on the road. 
Thus, the cameras are distributed along streets and roadways and are configured to detect vehicles that pass through their respective fields of view.); comparing said collected identifying data with said accessed traffic surveillance system so as to generate matching data that includes all automobiles that match said collected identifying data (Somanath, par. 56, “reference images … fetched from … the computer 106 of the records agency 105 and/or the computer 117 of the police station 115 may be used by the image processing module 735 for identifying the vehicle 205 and to analyze actions performed by the vehicle 205.”); and comparing said matching data over a plurality of real time intervals so as to determine if said matching data is indicative of a stalking vehicle (Sun continuously detects following vehicles. See FIG. 9 and section IV.A. Each use of the re-trained YOLO is a comparison and the time between each comparison is an interval. The detections are performed to enable the subsequent anomaly detection.). The rationale for obviousness is the same as provided for claim 1. Regarding claim 7, Sun in view of Somanath and in further view of Choi teaches the method as in claim 6, wherein said step of comparing said collected identifying data with said accessed traffic surveillance system includes executing algorithms configured for machine learning (ML) (Somanath, par. 37, “the surveillance data may be collected and evaluated on the basis of a machine learning model by the security computer 150 and/or by the computer 117 at the police station 115.”) and artificial intelligence (AI) operable to determine said generated matching data (YOLO and LOF can each be viewed as an ML algorithm or an AI algorithm. See Sun at sections II.A-C. Sun uses YOLO to detect objects (vehicles) from a rear-camera feed. 
Object detection from visual information, especially in the context of objects appearing in a rear-view behind a vehicle, is a human behavior mimicked by the trained algorithm implemented with a computer. LOF compares a target vehicle’s behavior pattern to patterns of normal behavior (i.e., not stalking) to determine whether or not a vehicle detected by YOLO is a stalking vehicle or a normal following vehicle.). The rationale for obviousness is the same as provided for claim 1. Regarding claim 8, Sun in view of Somanath and in further view of Choi teaches the method as in claim 7, wherein said ML/AI algorithms are trained (YOLO and LOF are configured for ML and AI because they are useable for ML and AI applications. See Sun at pg. 2, section II.A.2, “To track and identify the following vehicles, we use the You Only Look Once (YOLO) algorithm [13]” and at pg. 3, section II.C, “After we obtain the following vehicle’s following time and our vehicle’s critical driving behavior within the following time, we adopt Local Outlier Factor (LOF) as our anomaly detection algorithm to detect the abnormal following vehicles”) using repeated downloads of a plurality of automobile images (See Somanath at par. 56), each automobile image being represented as a plurality of pixel values associated with a respective automobile (Sun, pg. 2, section II.A.3, “pre-train YOLO model [18] to track and identify the following vehicles”; pg. 2, section II.A.2, “trained on COCO dataset [17]”; COCO includes a vehicle category. As extrinsic evidence that the disclosed pre-trained YOLO model was trained on vehicle images: see Microsoft COCO: Common Objects in Context to Lin et al., e.g., the “Vehicle” category in FIG. 11, “bus” and “car” in Table 1, and “car” in Table 2). The rationale for obviousness is the same as provided for claim 1. Regarding claim 12, Sun in view of Somanath and in further view of Choi teaches the method as in claim 1, further comprising: using a gyroscope (Sun, pg. 
4, section III.A, “We do experiments with the smartphone (i.e., Motorola Moto E) since it has camera and IMU sensors. It will be deployed on the rear deck or rear windshield of our vehicle … to film the rear view of our vehicle for the following time estimation of the following vehicles and read gyroscope data streams for our vehicle’s critical driving behavior detection.”; The Motorola Moto E’s IMU includes an accelerometer.), detecting when the vehicle is executing a turn (Sun, pg. 3, section II.B.2, “We will use IMU sensors (i.e., Gyroscope) to sense our vehicle’s critical driving behavior. The gyroscope sensor can indicate the driver’s left/right turn.”); receiving in real time said imaging data immediately prior to said detected turn (The image acquisition is continuous, which means imaging data is received before, during, and after a detected turn. See Sun at section IV.A, “YOLO can continuously detect the following vehicles as shown in Fig. 9, we showcase the following vehicle identification and tracking over the continuous video frames.”); receiving in real time said imaging data immediately after said detected turn (The image acquisition is continuous, which means imaging data is received before, during, and after a detected turn. See Sun at section IV.A, “YOLO can continuously detect the following vehicles as shown in Fig. 9, we showcase the following vehicle identification and tracking over the continuous video frames.”); comparing in real time said immediately prior imaging data with said immediately after imaging data and generating comparison data (A following vehicle detected in an image is data for comparison. The vehicle can make a number of turns, where the more turns made while the following vehicle is detected indicates a stronger likelihood of the following vehicle being a stalking vehicle. See Sun at pg. 3, section II.B.1. The following time includes one or more turns, depending on the vehicle’s chosen path. 
Detecting a turn while detecting the following vehicle from images acquired before and after the turn is a comparison of the images preceding and following the turn because the same vehicle being detected in those images indicates a positive instance of a stalking vehicle.); and if said comparison data is identical (Sun - the same vehicle is detected before and after a turn), generating potential stalking vehicle data (i.e., for one of a plurality of turns during the following time), but does not explicitly teach that which is explicitly further taught by Choi. Choi further teaches using an accelerometer (Choi, par. 47, “the detector 120 may be configured to detect driving information of the vehicle 1. The detector 120 may further include a speed detector configured to detect a driving speed of the vehicle 1. The speed detector may include a plurality of wheel speed sensors respectively mounted on a plurality of wheels of the vehicle 1, and may include an acceleration sensor configured to detect an acceleration of the vehicle 1.”). Sun in view of Somanath and in further view of Choi is analogous to the claimed invention for the same reasons provided above. Choi further discloses a detector including an accelerometer that detects speed and acceleration of a moving vehicle. Thus, Choi shows that it was known in the art before the effective filing date of the claimed invention to use an accelerometer to accurately determine a vehicle’s position and speed, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, monitoring vehicle trajectories to increase awareness of suspicious or dangerous events surrounding a vehicle. 
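Choi's accelerometer is cited for determining a vehicle's speed and acceleration. As a toy illustration of why acceleration samples support speed estimation, longitudinal acceleration can be integrated over time (forward-Euler); this is an assumption-laden sketch, since a production system such as Choi's detector fuses wheel-speed sensors and other inputs rather than relying on raw integration, which drifts:

```python
def integrate_speed(v0, accel_samples, dt):
    """Estimate a speed trace by forward-Euler integration of longitudinal
    accelerometer samples (m/s^2) taken every dt seconds, starting from an
    initial speed v0 (m/s). Returns the trace including v0. Illustrative
    only: real localization fuses wheel-speed, GPS, and IMU data because
    integrating raw acceleration accumulates drift."""
    speeds = [v0]
    for a in accel_samples:
        speeds.append(speeds[-1] + a * dt)
    return speeds
```

For example, starting at 10 m/s with samples [2.0, 2.0, -1.0] m/s² at 0.5 s spacing yields the trace [10.0, 11.0, 12.0, 11.5].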
A person of ordinary skill in the art would have been motivated to modify the IMU of the P2D2 vehicle disclosed by Sun in view of Somanath and in further view of Choi by adding an accelerometer as further disclosed by Choi, to thereby determine the vehicle’s speed and acceleration at any given point in time. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield predictable results with the benefit of improving the accuracy of localizing the vehicle as it travels along streets and roadways and is being followed by a potential stalking vehicle and improving the stalking classification accuracy by detecting when potential stalking vehicles suddenly accelerate to catch up to a vehicle they are following. Regarding claim 13, Sun in view of Somanath, and in further view of Choi teaches the method as in claim 12, further comprising repeatedly determining if said comparison data is identical after a predetermined number of turns are detected and, if so, generating final stalking vehicle data (Sun - After detection of the first turn (1 is a predetermined number of turns) the label/output is the same, i.e., a binary indication that a turn was detected.). Regarding claim 14, Sun in view of Somanath and in further view of Choi teaches the method as in claim 12, wherein said step of comparing in real time said immediately prior imaging data with said immediately after imaging data and generating comparison data includes algorithms configured for machine learning (ML) and artificial intelligence (AI) operable to determine said generated matching data (YOLO and LOF are configured for ML and AI because they are useable for ML and AI applications. See Sun at pg. 2, section II.A.2, “To track and identify the following vehicles, we use the You Only Look Once (YOLO) algorithm [13]” and at pg. 
3, section II.C, “After we obtain the following vehicle’s following time and our vehicle’s critical driving behavior within the following time, we adopt Local Outlier Factor (LOF) as our anomaly detection algorithm to detect the abnormal following vehicles”). Regarding claim 17, Sun in view of Somanath and in further view of Choi teaches the method as in claim 1, wherein: said collected identifying data includes camera data (Sun, pg. 4, section III.A.2, “video data processing with YOLO”; A video camera generates camera data.), video data (Sun, pg. 4, section III.A.2, “video data processing with YOLO”), and shape data (Sun - Neural networks trained to recognize objects in images are trained to detect shape patterns.); and said automobile identification characteristics include shape data (Reference images of cars depict the various shapes that comprise the cars. See Somanath at par. 56, “reference images … fetched from … the computer 106 of the records agency 105 and/or the computer 117 of the police station 115 may be used by the image processing module 735 for identifying the vehicle 205 and to analyze actions performed by the vehicle 205.”). The rationale for obviousness is the same as provided for claim 1. Claims 18-21, 24 and 25 substantially correspond to claims 1-3, 6, 12 and 13 reciting a system for identifying in real time if a vehicle having access to the Internet is being stalked, comprising: a computing device (Sun, pg. 4, “Intel Core i7 CPU”); a memory device (Sun, pg. 4, “OptiPlex 7050 Dell desktop running Ubuntu 16.04 OS”) in data communication with said computing device and that includes structures for storing programming and data (Sun, pg. 4, “Intel Core i7 CPU”); an imaging assembly (Sun, pg. 2, “we can extract the following time of all the following vehicles with a camera”) in data communication with said computing device and mounted in (See Choi) a rear-facing position adjacent a rear windshield of the vehicle (Sun, pg. 
2, section II.A.1, “we use the camera to monitor the following vehicles at the rear view of our vehicle.”), said imaging assembly including at least one sensor (camera sensor) configured to repeatedly collect in real time image data that is indicative of at least one following vehicle; wherein said computing device, executing said programming, is operable to perform the steps of claims 1-3, 6, 12 and 13. Claim 19 corresponds to the combination of claims 2 and 3. Claim 20 corresponds to the first two clauses of claim 6 and claim 21 corresponds to the last clause of claim 6. The rationale(s) for obviousness is/are the same as provided for claims 1-3, 6, 12 and 13. Claim 26 substantially corresponds to claims 1, 6 and 12 by reciting a method for identifying in real time if a vehicle having access to the Internet is being stalked, comprising the steps of claims 1, 6 and 12. The rationale(s) for obviousness is/are the same as provided for claims 1, 6 and 12. Claims 9, 10 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Somanath, in view of Choi, and in further view of U.S. Pat. Appl. Pub. No. 2019/0281414 to Schlesinger et al. (hereinafter “Schlesinger”). Regarding claim 9, Sun in view of Somanath, and in further view of Choi teaches the method as in claim 1, but does not teach that which is explicitly taught by Schlesinger. Schlesinger teaches accessing in real time a travel routing system (Schlesinger, par. 19, “The UEA application server 102 also interacts with a mapping server 108 that provides mapping information, such as city maps, locations of restaurants, locations of various users.”) configured to map common routes for reaching a selected destination, said travel routing system including (1) receiving a route selected by a vehicle driver (Schlesinger, par. 27, “using a navigation application on her mobile device 132”) and (2) determining a likelihood (Schlesinger, par. 
42, “An operation 406 evaluates the locations where the user may encounter an undesirable contact in view of these parameters to determine the potential encounters and their probabilities.”) that two vehicles would select a same route to said selected destination (Schlesinger, par. 20, “the map segment 130 may be disclosing location of Alice, who is traveling using the vehicle 132 to the Coffee Palace 136. The UEA system 100 may determine that Bart is at the Coffee Palace 136 or is going to be at the Coffee Palace 136 in the near future, based on data collected from Bart's mobile phone 134. In such a case, the UEA system 100 may find an alternate Café, such as the Donut Palace 136 a and recommend Alice to go to the Donut Palace 136 a instead of the Coffee Palace 136. If Alice approves such alternative destination, the UEA system 100 may suggest a route 138 to Alice to go to the Donut Palace 136 a.”). Sun in view of Somanath and in further view of Choi is analogous to the claimed invention for the same reasons provided above. Schlesinger discloses a travel routing system that determines a vehicle is predicted to have the same destination as an undesirable user and suggests a new route for the vehicle to avoid the user. Thus, Schlesinger shows that it was known in the art before the effective filing date of the claimed invention to map common routes to avoid one person from intersecting another that they do not wish to encounter, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, increasing awareness of suspicious or dangerous events surrounding a vehicle. 
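None of this appears as code in the cited references; purely as an illustrative sketch (the function and data below are hypothetical), the kind of likelihood Schlesinger's system computes — that two vehicles bound for the same destination end up on the same route — can be modeled by comparing each driver's candidate route set:

```python
from itertools import product

def same_route_probability(routes_a, routes_b):
    """Probability that two drivers end up on an identical route,
    assuming each picks uniformly at random among their own
    candidate routes (a route is a tuple of road-segment IDs)."""
    if not routes_a or not routes_b:
        return 0.0
    p_pair = (1.0 / len(routes_a)) * (1.0 / len(routes_b))
    return sum(p_pair for ra, rb in product(routes_a, routes_b) if ra == rb)

# Two candidate routes each, exactly one in common:
routes_x = [("main", "oak", "elm"), ("main", "pine", "elm")]
routes_y = [("main", "oak", "elm"), ("hwy9", "elm")]
likelihood = same_route_probability(routes_x, routes_y)  # 0.25
```

A real system would presumably weight candidates by historical selection frequency rather than uniformly, which is closer to the probabilistic evaluation Schlesinger describes at paragraph 42.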
A person of ordinary skill in the art would have been motivated to modify the smartphone disclosed by Sun in view of Somanath and in further view of Choi (See Sun at Section III.A.1) by adding as a feature to the driver’s smartphone the common-path prediction method disclosed by Schlesinger, to thereby notify a driver via their smartphone or the vehicle’s internal display that a suspicious person and/or vehicle is likely to intersect them along their current path of travel and offer the driver an alternate route to avoid that intersection, while Sun’s YOLO and LOF algorithms continuously operate. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of avoiding a stalking vehicle before it gets close enough to be seen by the rear-view camera, thereby giving the driver more time to avoid a dangerous outcome. Regarding claim 10, Sun in view of Somanath, in view of Choi and in further view of Schlesinger teaches the method as in claim 9, wherein said step of determining said likelihood includes algorithms configured for machine learning (ML) and artificial intelligence (AI) operable to determine said likelihood that two vehicles would select a same route to said selected destination (Sun’s YOLO and LOF algorithms, which are “configured for” ML and AI, operate in tandem while Schlesinger’s common-path prediction feature is executing on the driver’s smartphone.). The rationale for obviousness is the same as provided for claim 9. Claim 22 substantially corresponds to claim 9 by reciting a system operable to perform the steps of claim 9. The rationale for obviousness is the same as provided for claim 9. Claims 11 and 23 are rejected under 35 U.S.C.
103 as being unpatentable over Sun in view of Somanath, in view of Choi, in view of Schlesinger and in further view of A Context- and Trajectory-Based Destination Prediction of Public Transportation Users to Bieler et al. (hereinafter “Bieler”). Regarding claim 11, Sun in view of Somanath, in further view of Choi, and in further view of Schlesinger teaches the method as in claim 10, but does not teach that which is explicitly taught by Bieler. Bieler teaches wherein said ML/AI algorithms are trained using thousands of examples (Bieler, pg. 305, Data Analysis, “a data set of 3,002 users that use public transportation frequently.”) of which mapped route was selected by a user traveling to a predetermined destination (Bieler, pg. 305, Data Preprocessing and Feature Engineering, “Once a user arrives at a location and stays for more than the 15-min parameter, we consider this location to be the destination of the journey.”; pg. 305, Supervised Learning, “We applied a supervised learning technique to predict the destination of a user based on contextual features of past trips.”). Sun in view of Somanath, in view of Choi, and in view of Schlesinger is analogous to the claimed invention for the same reasons provided above. Bieler discloses training a multiclass random forest classifier to predict a route of travel based on thousands of examples of user data. Thus, Bieler shows that it was known in the art before the effective filing date of the claimed invention to use thousands of examples to train machine learning systems, and by extension AI systems because ML algorithms are useable in AI systems, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, increasing awareness of suspicious or dangerous events surrounding a vehicle. 
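Bieler's labeling rule — the first location where the user stays past a 15-minute threshold is taken as the journey's destination — is mechanical enough to sketch. The code below is an illustration under that single assumption; the function name and trip data are invented, not Bieler's:

```python
def label_destination(stops, dwell_minutes=15):
    """Return the first stop whose dwell time exceeds the threshold,
    treating it as the destination of the journey; return None if
    the journey has not yet reached such a stop.

    `stops` is a list of (location, arrive_min, depart_min) tuples,
    with times in minutes from the start of the journey.
    """
    for location, arrive, depart in stops:
        if depart - arrive > dwell_minutes:
            return location
    return None

trip = [("bus_stop_4", 0, 2), ("transfer_hub", 10, 14), ("office", 25, 500)]
destination = label_destination(trip)  # "office"
```

Labels produced this way are what the supervised learner (Bieler uses a multiclass random forest) would be trained against.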
A person of ordinary skill in the art would have been motivated to modify the system disclosed by Sun in view of Somanath, in view of Choi, and in further view of Schlesinger by training the pipeline of YOLO (vehicle detection) and LOF (anomaly detection) with thousands of examples as disclosed by Bieler, to thereby train the system to recognize patterns corresponding to potential stalking vehicles. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of avoiding inaccurate classification decisions caused by having too few training samples. Claim 23 substantially corresponds to claims 10 and 11 by reciting a system operable to perform the combined steps of claims 10 and 11. The rationale for obviousness is the same as provided for claim 11. Claim 23 differs from claim 11 by “using thousands of examples of two vehicles traveling to a common destination choose the same route to arrive at said common destination” instead of “using thousands of examples of which mapped route was selected by a user traveling to a predetermined destination”. Sun in view of Somanath, in further view of Choi, and in further view of Schlesinger teaches the system of claim 22, but does not teach that which is explicitly taught by Bieler. Bieler further teaches examples of two vehicles traveling to a common destination choosing the same route to arrive at a common destination (Bieler, pg. 312, “real-time information of route progression and stop locations”). Sun in view of Somanath, in view of Choi, in view of Schlesinger is analogous to the claimed invention for the same reasons provided above. Bieler discloses a destination prediction method that incorporates context as shown in Table II, including the mode of transport being bus, train, or city ferry.
Because Bieler focuses on public transportation vehicles, the routes are predetermined having known locations of “stop locations” and the context “on predicted destination at travel onset converted into route prediction improves trajectory prediction based on the public transportation vehicle location data obtained from a public registry.” (Bieler at pg. 312). Bieler thus provides a data set for a multitude of public transportation vehicles including “real-time information of route progression and stop locations” (pg. 312) and suggests that combining contextual information at the onset of a person traveling along public roads in their vehicle with a system that is capable of tracking all such vehicles along their entire routes would improve such a system (See Bieler at pg. 312, Discussion, first paragraph). Thus, Bieler shows that it was known in the art before the effective filing date of the claimed invention to combine a trainable vehicle trajectory prediction model with a large-scale vehicle monitoring system and a dataset that relates multiple vehicles travelling along the same routes to contextual information that predicts their common destination, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, increasing awareness of suspicious or dangerous events surrounding a vehicle.
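Because a public-transport route is predetermined, with known stops, a follower whose observed path matches a registered route can be excused even after several shared turns. As a hypothetical sketch only — no such code is disclosed in Sun, Bieler, or the other cited references — the differentiation the combination contemplates might look like:

```python
def classify_follower(shared_turns, follower_route, transit_routes, threshold=4):
    """Classify a following vehicle given how many consecutive turns
    it has matched the host vehicle, plus its observed turn sequence.

    A follower below the turn threshold stays benign; at or above it,
    the follower is excused as transit if its observed turns are a
    prefix of any registered (predetermined) public-transport route,
    and flagged as a potential stalking vehicle otherwise.
    """
    if shared_turns < threshold:
        return "benign"
    observed = tuple(follower_route)
    if any(observed == tuple(route[:len(observed)]) for route in transit_routes):
        return "transit"
    return "stalking"

bus_line_7 = ["L", "R", "L", "R", "S"]  # a predetermined turn sequence
```

The threshold parameter corresponds to the predetermined turn count (e.g., four) that the motivation statement treats as the fallback signal when the vehicle type cannot be recognized.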
A person of ordinary skill in the art would have been motivated to combine the system disclosed by Sun in view of Somanath, in view of Choi, in view of Schlesinger with the contextual data machine learning model disclosed by Bieler and the suggested combination with public vehicle transportation data made by Bieler, to thereby train the models of the system to differentiate public transportation vehicles from other vehicles based on a likelihood that a following vehicle is a public transportation vehicle and not a stalking vehicle, and vice versa, based on early contextual information including the number of stops or points in common shared by two vehicles and a predicted destination. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of reducing false positive stalking vehicle detections caused by detecting a following vehicle for a predetermined number of turns, e.g., 4, in the event the system is unable to recognize the vehicle type and weights its decision more heavily on the IMU data and the number of detected turns. Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Somanath, in view of Choi and in further view of Traffic Flow Prediction Model Using Google Map and LSTM Deep Learning to Azad et al. (hereinafter “Azad”). Regarding claim 15, Sun in view of Somanath and in view of Choi teaches the method as in claim 14, but does not teach that which is explicitly taught by Azad. Azad teaches accessing in real time a traffic flow records system having a plurality of records indicative of traffic volume on respective roadways, times, and dates (Azad, section II.C, “Stacked LSTM model predicted traffic speed employed to time-dependent correlation for forecasting the traffic flow for a road section.
In this context, traffic speed and traffic flow data are collected for several road sections from Google Maps and actual or field level at the five-minute interval for a day from 8:00 AM to 12 PM and 3:00 PM to 7:00 PM.”; The LSTM is trained according to the flow chart in Fig-4.1). Sun in view of Somanath and in view of Choi is analogous to the claimed invention for the same reasons provided above. Azad discloses training a long short-term memory (LSTM) network model using traffic flow data continuously acquired from Google Maps. Thus, Azad shows that it was known in the art before the effective filing date of the claimed invention to train ML/AI systems using repeated downloads of traffic flow data indicative of specific roads at certain times on specific dates, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, increasing awareness of suspicious or dangerous events surrounding a vehicle. A person of ordinary skill in the art would have been motivated to combine the system disclosed by Sun in view of Somanath and in further view of Choi with the traffic flow prediction method disclosed by Azad to thereby provide the driver with a prediction of traffic flow in a specific area that they plan to drive through and to alert them when a suspicious vehicle is predicted to intersect their path of travel. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of avoiding areas of high congestion, which leaves the driver vulnerable to unsafe outcomes by being stopped or confined to a small area at a low rate of speed and being unable to escape if needed.
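Azad's stacked LSTM itself is beyond a short sketch, but the five-minute-interval collection he describes maps onto the standard sliding-window preparation used for any sequence model. A hypothetical illustration (the readings and helper below are invented, not Azad's):

```python
def make_supervised_windows(speeds, history=6):
    """Turn a series of 5-minute traffic-speed readings into
    (input_window, next_reading) pairs, the supervised shape a
    sequence model such as a stacked LSTM is trained on."""
    return [(speeds[i:i + history], speeds[i + history])
            for i in range(len(speeds) - history)]

# One hour of readings at 5-minute intervals (mph), congestion building:
hourly = [52, 50, 47, 41, 38, 36, 35, 33, 31, 30, 28, 27]
windows = make_supervised_windows(hourly, history=6)
# windows[0] == ([52, 50, 47, 41, 38, 36], 35)
```

Continuous re-collection (Azad notes “the data collection duration was continuous”) would simply keep appending readings and regenerating windows.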
Regarding claim 16, Sun in view of Somanath, in view of Choi, and in further view of Azad teaches the method as in claim 15, wherein said ML/AI algorithms are trained using repeated downloads of said plurality of traffic flow records (Azad, section III, “the data collection duration was continuous”). The rationale for obviousness is the same as provided for claim 15.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Microsoft COCO: Common Objects in Context to Lin et al. is pertinent because it demonstrates that the COCO dataset used by Sun to pre-train the YOLO algorithm includes a vehicle object category, e.g., “[the] final selection of categories attempts to pick categories with high votes, while keeping the number of categories per super category (animals, vehicles, furniture, etc.) balanced” (section 3.1). See also, e.g., the “Vehicle” category in FIG. 11, “bus” and “car” in Table 1, and “car” in Table 2. JP2005056068A discloses “a rear monitoring device” and “license plate information registration unit” that “can register in advance license plate information such as vehicles involved in crimes and stalker vehicles” such that “when these vehicles come behind the host vehicle[, the] driver can instantly recognize the presence of those vehicles and can respond quickly” in paragraph 36. Meteorological outliers detection based on artificial intelligence to Xue et al. is provided as an example of an ML algorithm, Local Outlier Factor (LOF), that is configured for an AI system, which is pertinent to claim 2. US20230223123A1 is pertinent for paragraph 37 disclosing substantially similar subject matter as disclosed in the paragraph at lines 5-16 of page 11 of the specification, as well as the claims and throughout the specification, which indicates that at least some of the disclosed subject matter regarding ML and AI is well-known in the prior art.
Anomalous Vehicle Recognition in Smart Urban Traffic Monitoring as an Edge Service to Chen et al. is pertinent to the problem being solved in claim 1 because it discloses a network that connects vehicles and a cloud server, the server for downloading automobile images to confirm if they indicate suspicious vehicles. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN P POTTS whose telephone number is (571)272-6351. The examiner can normally be reached M-F, 9am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RYAN P POTTS/Examiner, Art Unit 2672 /SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672 1 See https://web.archive.org/web/20220503111328/https://dictionary.cambridge.org/dictionary/english/configured. 
2 See https://web.archive.org/web/20240204175417/https://www.dictionary.com/browse/street. 3 See https://web.archive.org/web/20230926003615/https://www.dictionary.com/browse/roadway. 4 See https://web.archive.org/web/20230511210216/https://www.merriam-webster.com/thesaurus/roadway. 5 See https://web.archive.org/web/20231123190527/https://aws.amazon.com/compare/the-difference-between-artificial-intelligence-and-machine-learning/. 6 See https://web.archive.org/web/20230208130354/https://ai.engineering.columbia.edu/ai-vs-machine-learning/. 7 See also 5.

Prosecution Timeline

Feb 05, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591966
METHOD AND APPARATUS FOR ANALYZING BLOOD VESSEL BASED ON MACHINE LEARNING MODEL
2y 5m to grant Granted Mar 31, 2026
Patent 12560734
METHOD AND SYSTEM FOR PROCESSING SEISMIC IMAGES TO OBTAIN A REFERENCE RGT SURFACE OF A GEOLOGICAL FORMATION
2y 5m to grant Granted Feb 24, 2026
Patent 12555259
PRODUCT IDENTIFICATION APPARATUS, PRODUCT IDENTIFICATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
2y 5m to grant Granted Feb 17, 2026
Patent 12548658
Systems and Methods for Scalable Mapping of Brain Dynamics
2y 5m to grant Granted Feb 10, 2026
Patent 12538743
WARPAGE AMOUNT ESTIMATION APPARATUS AND WARPAGE AMOUNT ESTIMATION METHOD
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+36.8%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
