Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to papers filed on 2/23/2026.
Claims 13, 15, 17, 19, and 21 have been amended.
Claims 1-12, 18, 20, and 22 have been cancelled.
No claims have been added.
Claims 13-17, 19, and 21 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 13-17, 19, and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
The claims are directed to a system; thus, Claims 13-17, 19, and 21 fall within one of the four statutory categories. See MPEP 2106.03.
Step 2A, Prong 1:
The claimed invention recites an abstract idea as set forth in MPEP § 2106.04. The claim limitations of the independent claim that recite the abstract idea are identified below.
Claim 13 recites:
accessing data corresponding to a vehicle of interest alert from an alert provider when the vehicle is enrolled with the alert provider to receive alerts issued by the alert provider;
computing whether the vehicle is within a geofence [designated area] for the vehicle of interest alert;
accessing data from the camera when the vehicle is within the geofence [designated area], the data corresponding to images of one or more other vehicles;
computing, with a machine-learned model, while the vehicle is within the [designated area] geofence, a vehicle of interest match estimate for the one or more other vehicles based at least in part on the data from the camera, the computing of the vehicle of interest match estimate terminating when the vehicle moves outside of the [designated area] geofence;
computing a vehicle of interest identification in response to the vehicle of interest match estimate exceeding a threshold level; and
computing one or both of a travel direction for the vehicle of interest and a speed of the vehicle of interest based at least in part on the [collected] data from the camera; and
transmitting, while the vehicle is being driven, data corresponding to the vehicle of interest identification to a remote computing device [entity] that is located outside the vehicle, the remote computing device [entity] being associated with the alert provider, the vehicle of interest alert comprising one or both of the travel direction for the vehicle of interest and the speed of the vehicle of interest.
The claim limitations identified above, as drafted, recite a process that, under its broadest reasonable interpretation, covers the performance of managing personal behavior or relationships or interactions between people in the form of monitoring, tracking, and reporting objects. Other than reciting a computer implementation, nothing in the claim elements precludes the steps from encompassing the performance of managing personal behavior or relationships or interactions between people, which represents the abstract idea of certain methods of organizing human activity. But for the recitation of generic computer system components, the claimed invention merely recites a process for receiving alerts, identifying objects based on the alert, determining a probability match of the object, and returning the match to the alerting entity.
Step 2A, Prong 2:
This judicial exception is not integrated into a practical application. In particular, the claims recite additional elements such as:
a system comprising: a vehicle;
a camera located on the vehicle; one or more processors located onboard the vehicle; and
one or more computer-readable media that stores executable instructions.
In particular, the additional elements cited above, beyond the abstract idea, are recited at a high level of generality and amount to no more than a generic recitation of basic functionality, i.e., mere instructions to apply the judicial exception using generic computer technology components.
Accordingly, since the specification describes the additional elements in general terms, without describing the particulars, the additional elements may be broadly but reasonably construed as generic computing components being used to perform the judicial exception (see specification at [0022]; [0026]; [0027]). Furthermore, both the machine-learned model and the geofence used to perform the recited steps are recited at a high level of generality and are only nominally and generically recited as tools for performing these steps. These claimed additional elements merely recite the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
Thus, the additional claim elements are not indicative of integration into a practical application, because the claims do not involve improvements to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)), the claims do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)), the claims do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e)). Therefore, the claims do not, for example, purport to improve the functioning of a computer. Nor do they effect an improvement in any other technology or technical field. Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea and the claims are directed to an abstract idea.
Step 2B:
The claims do not include additional elements, individually or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept at Step 2B. Thus, the claims are not patent eligible.
Dependent Claims:
Claims 14-17, 19, and 21 recite further elements related to the monitoring, tracking, and reporting steps of the parent claims. These activities fail to differentiate the claims from the related activities in the parent claims and fail to provide any material that would render the claimed invention significantly more than the identified abstract ideas.
Claim 14 recites “wherein the vehicle of interest alert comprises data corresponding to one or more of a make, a model, a color, and a license plate number for the vehicle of interest” which narrows how the abstract idea may be performed but does not make the claim any less abstract.
Claim 15 recites “wherein the camera comprises one or more of an advanced driver assistance system camera, a backup camera, and a sideview camera” which narrows how the abstract idea may be performed but does not make the claim any less abstract. Additionally, the cameras are recited at a high level of generality and are only nominally and generically recited as tools for performing these steps.
Claim 16 recites “wherein the vehicle of interest match corresponds to a likelihood calculated by the machine-learned model that the one or more other vehicles matches the vehicle of interest” which narrows how the abstract idea may be performed but does not make the claim any less abstract.
Claim 17 recites “wherein the vehicle of interest identification comprises two or more of: an updated location of the vehicle of interest; the travel direction for the vehicle of interest; the speed of the vehicle of interest; and an image of the vehicle of interest” which narrows how the abstract idea may be performed but does not make the claim any less abstract.
Claim 19 recites “accessing, with the computing device, data corresponding to an updated vehicle of interest alert; computing, with the computing device, whether the vehicle is within an updated geofence for the updated vehicle of interest alert; and continuing to access the data from the camera when the vehicle is within the updated geofence” which narrows how the abstract idea may be performed but does not make the claim any less abstract. Additionally, the camera and geofence (including the updated geofence) are recited at a high level of generality and are only nominally and generically recited as tools for performing these steps.
Claim 21 recites “wherein the vehicle is a passenger vehicle separate of the emergency services agency” which narrows how the abstract idea may be performed but does not make the claim any less abstract.
The claims do not provide any new additional limitations or meaningful limits beyond the abstract idea that are not addressed above in the independent claims; therefore, they do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea. Thus, after considering all claim elements, both individually and as a whole, it has been determined that the claims do not integrate the judicial exception into a practical application or provide an inventive concept. Therefore, Claims 14-17, 19, and 21 are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 13-17, 19, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wiles (Pub. No. US 2019/0051142 A1) in view of Evanitsky et al. (Patent No. US 9,530,310 B2) in further view of Robinson et al. (Patent No. US 10,292,036 B1).
In regards to Claim 13, Wiles discloses:
A method/system for identifying a vehicle of interest, comprising:
a vehicle; a camera located on the vehicle; one or more processors located onboard the vehicle; and one or more non-transitory computer-readable media that store instructions that are executable by the one or more processors to perform operations; (at least Abstract; [0011]; [0020]; [0030])
accessing data corresponding to a vehicle of interest alert from an alert provider; ([0030], vehicles receive alerts from law enforcement requesting assistance in locating vehicles of interest)
accessing data from the camera, the data corresponding to images of one or more other vehicles; ([0011]; [0029]; [0061], camera data (sensor) is used to collect images/data regarding vehicles)
computing, with a machine-learned model, a vehicle of interest match estimate for the one or more other vehicles based at least in part on the data from the camera; ([0037]; [0043], an analysis unit is used to determine if any vehicles match the vehicle of interest; the analysis unit uses weighting algorithms that weight various factors (models) to generate scores and confidence levels for each analyzed vehicle; as this is performed by an “analysis unit” and/or other components of the system, there is no indication that it is not (or cannot be) performed automatically; [0041]-[0044], data identifying a vehicle is extracted from the alert and compared to image data to determine matches; see at least [0036]-[0045] for a detailed description of this process)
computing a vehicle of interest identification in response to the vehicle of interest match estimate exceeding a threshold level; ([0043]-[0045], scores and confidence levels are generated by the weighting model and a report is sent to the law enforcement entity if the results are above a threshold (“…the threshold level above which a report is made, and the threshold level above which the score or confidence level is considered overwhelmingly high….” ))
transmitting, while the vehicle is being driven, data corresponding to the vehicle of interest identification to a remote computing device that is located outside the vehicle, the remote computing device being associated with the alert provider, the vehicle of interest alert comprising one or both of the travel direction for the vehicle of interest and the speed of the vehicle of interest ([0025]; [0026], upon determination of a vehicle of interest, information is transmitted to a remote system, such as law enforcement (alert provider); [0023]; [0029]; [0030], vehicles receive alerts from law enforcement (alert provider) requesting assistance in locating vehicles of interest, “…may be operated to collect the relevant information about the surrounding vehicles 104 to provide assistance to law enforcement…even if the vehicle is still en route…”; [0025], “… report message 115 may include the current location of vehicle 102, date, time, direction of travel etc.”)
Wiles discloses the above system/method for identifying a vehicle of interest including using sensors/cameras on mobile vehicles and using characteristics of a vehicle of interest for assisting units in identifying the vehicle of interest. Wiles does not explicitly disclose, but Evanitsky teaches:
computing one or both of a travel direction for the vehicle of interest and a speed of the vehicle of interest based at least in part on the data from the camera; (column 5, ¶ 2, data collected for identifying a vehicle of interest includes direction; column 8, ¶ 4, data collected regarding a vehicle of interest and requested data for identifying a vehicle of interest include the speed; column 2, ¶ 1, vehicle of interest criteria used for determining the search include initial location, speed, and direction (trajectory))
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Wiles so as to have included computing one or both of a travel direction for the vehicle of interest and a speed of the vehicle of interest based at least in part on the data from the camera, as taught by Evanitsky, in order to more efficiently identify the area of search for a vehicle by providing additional information regarding where the vehicle may be and to increase the speed and likelihood of resolution (Evanitsky, column 10, ¶ 3).
Additionally, while Wiles does disclose accessing data from [a] camera and computing a vehicle of interest match estimate for the one or more other vehicles based at least in part on the data from the camera, it does not explicitly disclose a specified search area for identifying the vehicle of interest; however, Evanitsky teaches:
computing whether the vehicle is within a geofence for the vehicle of interest alert;
accessing data from [a] camera and computing a vehicle of interest match estimate for the one or more other vehicles based at least in part on the data from the camera when the vehicle is within the geofence; (column 8, ¶ 2; column 9, ¶ 4, a search area or radius (geofence) is determined to select which sensors/cameras are employed for identifying the vehicle of interest; the area/radius of search is dynamic and can be updated based on changes of information regarding the vehicle of interest; column 9, ¶ 2; etc., sensors/cameras can be mounted on vehicles; one of ordinary skill would recognize that the selection of cameras within the search area/radius would include only the mobile/mounted cameras that are within that area (when the vehicle is within the geofence) and not the ones that are outside of that area; column 7, ¶ 2, image data within the field of view is used to estimate matches of vehicles of interest and provide alerts to authorities; as described above, the cameras used are within the geofence) and
the computing of the vehicle of interest match estimate terminating when the vehicle moves outside of the geofence (column 8, ¶ 2; column 9, ¶ 4, a search area or radius (geofence) is determined to select which sensors/cameras are employed for identifying the vehicle of interest, the area/radius of search is dynamic and can be updated based on changes of information regarding the vehicle of interest; column 9, ¶ 2; etc., sensors/cameras can be mounted on vehicles, one of ordinary skill would recognize that the selection of cameras within the search area/radius would include only the mobile/mounted cameras that are within that area (when the vehicle is within the geofence) and not the ones that are outside of that area).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Wiles so as to have included computing whether the vehicle is within a geofence for the vehicle of interest alert; accessing data from a sensor of the vehicle when the vehicle is within the geofence; and the computing of the vehicle of interest match estimate terminating when the vehicle moves outside of the geofence, as taught by Evanitsky in order to ensure efficiency in the system by ensuring that only useful and necessary sensors are being employed (Evanitsky, column 2, ¶ 1, “…the number of cameras involved can be kept at a minimum…”).
Wiles discloses accessing data corresponding to an alert from an alert provider via a device (associated with a vehicle), as demonstrated above. Wiles/Evanitsky does not explicitly disclose the device is enrolled to receive alerts, however, Robinson teaches:
the [device] is enrolled with the alert provider to receive alerts issued by the alert provider (column 16, ¶ 2, (“For example, the network-connectable devices 105 subscribe to receive reverse emergency messages relating to one or more specific types of incident (for example, missing child alerts, active shooter alerts, and the like)…”, “reverse emergency message” describes a message sent from a public safety organization to devices in the community)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Wiles/Evanitsky so as to have included the [device] is enrolled with the alert provider to receive alerts issued by the alert provider, as taught by Robinson in order to ensure that alerts/notifications are provided to those entities that requested them (Robinson, at least column 16, ¶ 2). One of ordinary skill in the art would recognize how to apply the device enrollment for notifications/alerts to the processors located onboard the vehicles of Wiles/Evanitsky.
In regards to Claim 14, Wiles discloses:
wherein the vehicle of interest alert comprises data corresponding to one or more of a make, a model, a color, and a license plate number for the vehicle of interest ([0012]).
In regards to Claim 15, Wiles discloses:
wherein the camera comprises one or more of an advanced driver assistance system camera, a backup camera, and a sideview camera ([0029]; [0061], shows the sensors are cameras).
The particular descriptive material used to describe/label/name the cameras has been deemed non-functional descriptive material and is therefore accorded no patentable weight. The particular type(s) of cameras used does not significantly affect the processing or functioning of the claimed invention.
In regards to Claim 16, Wiles discloses:
wherein the vehicle of interest match corresponds to a likelihood calculated by the machine-learned model that the one or more other vehicles matches the vehicle of interest ([0041]-[0045], shows the process for comparing vehicles and calculating a likelihood (score, confidence, threshold) that a one or more vehicles matches the vehicle of interest).
In regards to Claim 17, Wiles discloses the above system/method for using characteristics of a vehicle of interest for assisting units in identifying the vehicle of interest. Wiles does not explicitly disclose, but Evanitsky teaches:
wherein the vehicle of interest identification comprises two or more of:
an updated location of the vehicle of interest; (column 2, ¶ 1, vehicle of interest criteria used for determining the search include initial location, speed, and direction (trajectory); column 1, SUMMARY, ¶ 4; column 2, ¶ 5, locations of the vehicle can be reported in real time, indicating the location is updated as the vehicle moves)
the travel direction for the vehicle of interest; (column 5, ¶ 2, data collected for identifying a vehicle of interest includes direction; column 2, ¶ 1, vehicle of interest criteria used for determining the search include initial location, speed, and direction (trajectory))
the speed of the vehicle of interest; (column 5, ¶ 2; column 8, ¶ 4, data collected regarding a vehicle of interest and requested data for identifying a vehicle of interest includes the speed; column 2, ¶ 1, vehicle of interest criteria used for determining the search includes initial location, speed, direction (trajectory)) and
an image of the vehicle of interest
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Wiles so as to have included wherein the vehicle of interest identification comprises two or more of: an updated location of the vehicle of interest; the travel direction for the vehicle of interest; the speed of the vehicle of interest; and an image of the vehicle of interest, as taught by Evanitsky, in order to more efficiently identify the area of search for a vehicle by providing additional information regarding where the vehicle may be and to increase the speed and likelihood of resolution (Evanitsky, column 10, ¶ 3).
In regards to Claim 19, Wiles discloses the above system/method for identifying a vehicle of interest including using sensors/cameras on mobile vehicles. Wiles does not explicitly disclose a specified search area for identifying the vehicle of interest, however, Evanitsky teaches:
accessing data corresponding to an updated vehicle of interest alert;
computing whether the vehicle is within an updated geofence for the updated vehicle of interest alert; and
continuing to access the data from the camera when the vehicle is within the updated geofence (column 8, ¶ 2-4; column 9, ¶ 4; column 10, ¶ 2-3, after an initial search area/radius is determined, it can be dynamically updated based on received criteria (such as speed, path, trajectory, sightings, etc.); sensors/cameras can be “dynamically instructed” and updated data can be used “to alert another image-capturing unit for tracking”, indicating that different sensors/cameras can be used based on the relationship between the search area/radius and the locations of the sensors/cameras; the updated search area/radius criteria represent an “updated vehicle of interest alert” that is communicated to the sensors/cameras; column 9, ¶ 2; etc., sensors/cameras can be mounted on vehicles; one of ordinary skill would recognize that the selection of cameras within the search area/radius would include only the mobile/mounted cameras that are within that area (when the vehicle is within the geofence) and not the ones that are outside of that area)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Wiles so as to have included accessing, with the computing device, data corresponding to an updated vehicle of interest alert; computing, with the computing device, whether the vehicle is within an updated geofence for the updated vehicle of interest alert; and continuing to access the data from the sensor when the vehicle is within the updated geofence, as taught by Evanitsky in order to ensure efficiency in the system by ensuring that only useful and necessary sensors are being employed (Evanitsky, column 2, ¶ 1, “…the number of cameras involved can be kept at a minimum…”).
In regards to Claim 21, Wiles discloses:
wherein the vehicle is a passenger vehicle separate of the emergency services agency (Figure 1; [0025], the vehicle (102) used to monitor surrounding vehicles can include passengers; [0020]; Claim 1, the vehicle used to monitor surrounding vehicles is a non-law enforcement vehicle (it is noted that throughout the reference the vehicle used to monitor surrounding vehicles, identified as drawing element “102”, is also identified as a “host vehicle”, for example, see Abstract))
Additional Prior Art Identified but not Relied Upon
Al Abed (Pub. No. US 2022/0044550 A1). Discloses wherein an analyzing module dynamically analyzes sensor data variables to identify a match between the first vehicle and the vehicle of interest and notifies the server based on the SDV exceeding a threshold. The DCM notifies the server of the location of the vehicle of interest. (see at least Abstract).
Dai et al. (CN 109034171 B). Discloses wherein a machine-learned model has been trained using a training dataset determined using information describing previous vehicles, the training dataset comprising one or more positive samples, each positive sample representing a vehicle identification characteristic (see at least page 2, lines 22-32; page 3, lines 13-23; Claim 1; Claim 3; Claim 5).
Gu et al. (CN 108492314 A). Discloses wherein a machine-learned model for tracking vehicles is trained using a training dataset determined using information describing previous vehicles, the training dataset comprising one or more positive samples, each positive sample representing a vehicle identification characteristic (see at least Abstract).
Massey (Pub. No. US 2017/0330460 A1). Discloses subscribing to receive alerts of vehicles of interest and comparing images of vehicles of interest. (see at least [0051]).
Tan et al. (CN 116645637 A). Discloses the starting of a camera when a vehicle enters a specific predefined area. (see at least page 3, lines 1-2; page 6, lines 26-17; Claim 3; Claim 9).
Response to Arguments
Applicant’s arguments filed 2/23/2026 have been fully considered but they are not persuasive.
I. Rejection of Claims under 35 U.S.C. §101:
Regarding Applicant’s remarks concerning Step 2B, the additional features, and Example 36: Applicant fails to provide evidence to demonstrate how/why the features/elements of the claims are comparable to the findings of the example. Applicant summarizes the example, then summarizes the claim, but fails to provide any analysis, comparison, etc. to support the assertion that they are eligible for the same reasons. The mere fact that both use cameras, memories with instructions, and compute data does not necessarily mean they provide the same “significantly more” material. For example, Applicant does not explain how/why computing direction/speed while within a geofence and using a machine-learned model to estimate a match for a vehicle from camera-collected data is comparable to reconstructing 3-D coordinates of an inventory item using a processor in combination with a high-resolution video array. Nor does Applicant explain how/why the computing in Applicant’s claimed invention would achieve the “significantly more” in a manner comparable to the computing in Example 36.
Applicant does not provide any explanation or evidence to demonstrate that computing, with a machine-learned model, a vehicle of interest match estimate for the one or more other vehicles based at least in part on the data from the camera, with the computing of the vehicle of interest match estimate terminating when the vehicle moves outside of the geofence, is not well-understood, routine, conventional activity. It is also noted that well-understood, routine, conventional activity is not relied upon as part of the claim rejections at this time. This issue was also addressed in the previous office action, provided here for reference:
Applicant asserts that “Claim 1 has a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field which reflect such solutions.”; however, Applicant fails to provide any evidence, support, arguments, etc. to demonstrate how/why any specific limitation or combination of limitations would not be well-understood, routine, conventional (WURC). Applicant merely recites claim language and fails to identify the alleged limitation or combination of limitations that would not be WURC and/or explain why they would not be WURC or why they would be significantly more than the abstract ideas.
Applicant asserts that the claimed invention addresses problems specifically arising in the realm of vehicle of interest searches. The specification, including ¶¶ 5 and 6 as cited by Applicant, fails to provide the level of evidence, background, etc. to demonstrate that the alleged problem existed, why prior systems could/would not address it, how those alleged deficiencies are addressed in a meaningful manner, etc. Applicant’s specification and remarks merely make assertions regarding problems in the art, provided solutions, and desired benefits without the necessary background, evidence, and/or support. Please see MPEP 2106.05(a), Improvements to the Functioning of a Computer or To Any Other Technology or Technical Field. Remarks from the previous office action related to similar evidence issues are provided here for reference:
Applicant argues that the claim integrates into a practical application and provides an improvement (“improves reliability of vehicle of interest searches by automatically identifying vehicles of interest based at least in part on images taken by vehicle sensors of other vehicles”). Applicant’s citations to the specification do not provide the level of evidence required to demonstrate the alleged improvement. Applicant makes assertions regarding false positive sightings, low reliability, the use of enrollment and geofence, etc., but does not provide sufficient evidence or background to support these assertions. For example, Applicant does not provide any explanation or background to demonstrate how the alleged practical application is achieved in a meaningful manner, why prior systems could/would not provide these alleged improvements (use of enrollment, geofence, transmitting alerts, etc.), how/why prior systems were deficient, how these deficiencies are addressed in a meaningful manner, etc. Applicant merely asserts that there is a problem and then asserts that the cited material addresses it. Applicant then provides assertions of benefits, such as easier alerts, more relevant data, etc.; however, as above, Applicant does not provide sufficient support to demonstrate that one of ordinary skill in the art would recognize the alleged improvements.
MPEP 2106.05(a), Improvements to the Functioning of a Computer or To Any Other Technology or Technical Field (“If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.”).
II. Rejection of Claims under 35 U.S.C. §102 and 35 U.S.C. §103:
Applicant argues that Evanitsky uses a centralized system that receives data from multiple cameras and Wiles only uses one vehicle.
First, Wiles is not limited to one vehicle and discloses multiple vehicles ([0011]; [0027]; [0052]; etc.). Additionally, although the examples provided in Wiles may often refer to one vehicle monitoring its surroundings, nowhere does Wiles exclude the ability to have multiple vehicles (with their associated cameras) doing the monitoring, and one of ordinary skill in the art, based on the disclosure, would not assume that Wiles intended the use of its system/method to be limited to only a single vehicle for each client.
Second, Wiles also sends alerts to a central entity (the law enforcement entity that sends out requests, collects data, etc.), so it is unclear how the number of vehicles used would preclude it from interacting with a central system, such as that disclosed in Evanitsky.
Third, it is unclear whether Applicant is arguing that the references could not be combined because there is no reason to modify or because of impermissible hindsight. Applicant asserts that the references cannot be combined because one uses a central system and the other uses one vehicle; however, Applicant provides no additional arguments, evidence, etc. to explain why they cannot be combined. For example, Applicant does not explain why the vehicle (that sends alerts/data to a central entity) could/would not be combined with a multi-camera system, how/why the methods/systems of the references are significantly different, why one of ordinary skill would not be motivated to combine them, why the references do not demonstrate, and/or why one of ordinary skill would not have the skill and knowledge, to combine elements from systems that perform similar tasks and functions, etc.
Applicant’s remaining remarks, drawn to the newly provided claim material, are moot in view of the newly provided prior art rejections, citations, and/or explanations, provided above. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN D SENSENIG whose telephone number is (571)270-5393. The examiner can normally be reached M-F: 10:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin can be reached at 571-272-6872. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.D.S/Examiner, Art Unit 3629 March 10, 2026
/SARAH M MONFELDT/Supervisory Patent Examiner, Art Unit 3629