Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 5, 2026 has been entered.
Information Disclosure Statement
The information disclosure statement filed February 12, 2026 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered. Specifically, NPL reference 1 has not been provided. The document provided is dated April 25, 2024, while listed NPL reference 1 is dated April 16, 2024.
The information disclosure statement filed February 12, 2026 fails to comply with 37 CFR 1.98(a)(1), which requires the following: (1) a list of all patents, publications, applications, or other information submitted for consideration by the Office; (2) U.S. patents and U.S. patent application publications listed in a section separately from citations of other documents; (3) the application number of the application in which the information disclosure statement is being submitted on each page of the list; (4) a column that provides a blank space next to each document to be considered, for the examiner’s initials; and (5) a heading that clearly indicates that the list is an information disclosure statement. The information disclosure statement has been placed in the application file, but the information referred to therein has not been considered. Specifically, an NPL reference has not been listed. The document provided is dated April 25, 2024, while listed NPL reference 1 is dated April 16, 2024.
Status of Claims
Claims 1, 9, and 17 have been amended.
Claim 6 has been cancelled.
No claims have been added.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1–5, 7–13, and 15–20 are rejected under 35 U.S.C. 103 as being unpatentable over Horstmann (US PGPub 2023/0334438 A1) in view of Wang 1 (CN 112434368 A) in further view of Wang 2 (CN 110570513 A).
In regards to claims 1, 9, 17, Horstmann discloses (Claim 1) a computing system comprising: (Claim 9) a non-transitory computer readable medium storing instructions that, when executed by one or more processors of a computing system, cause the computing system to aid a user after a vehicle incident while a vehicle of the user is at a scene of the vehicle incident, by performing operations that include: (Claim 17) a machine-learning method for providing aid to a user after a vehicle incident while a vehicle of the user is at a scene of the vehicle incident, the method being performed by one or more processors and comprising:
In regards to:
a network communication interface;
one or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the computing system to aid a user after a vehicle incident while a vehicle of the user is at a scene of the vehicle incident, by performing operations that include (Fig. 2; ¶ 19, 23, 24, 50, 55 wherein the accident is reported “directly after the damage is incurred” and where a determination of whether the vehicle can be driven is performed, thereby establishing that the user is reporting the incident and taking images at the scene of the incident, and that the system facilitates the towing or repair of the vehicle by allowing the user to directly communicate with a service provider
For the purposes of compact prosecution, in the event that the applicant does not agree with the analysis provided above, an alternate, more conservative interpretation, in view of Wang 1, has been provided to explicitly teach that image information after a vehicle incident has occurred can be collected and transmitted while at the scene of the vehicle incident.):
training one or more machine learning model being based on historical claim data associated with claims filed for prior vehicle incidents, the historical claim data including (i) user-indicated damage information identified by individual claimants interacting with a three-dimensional, virtual representation of their respective vehicle after a corresponding vehicle incident, to identify one or more regions of their vehicle that was damaged by a corresponding prior vehicle incident; (ii) image data, based on images captured by individual claimants of their respective vehicles after the respective vehicle is damaged by a corresponding prior vehicle incident; and (iii) payouts of the claims filed for the prior vehicle incidents (¶ 28 wherein the AI system is trained on historical information or other data related to similar damage to allow it to estimate cost, wherein the cost can be broken down by labor hour, part, and other assessments
¶ 27, 32, 55, 57, 58, 61 wherein the system allows for the reporting of a vehicle incident to an insurance company to further facilitate the damage assessment and/or repair process, which, as stated above, the system can later utilize for future assessments that comprise how an incident compares to other similar incidents to determine whether the vehicle can be repaired or is considered a total loss, and all costs associated with repairing, totaling, or replacing (i.e. payout) a vehicle
¶ 15, 22, 28, 55, 59 wherein the system determines a real-time damage estimate
¶ 13, 22, 24, 25, 30, 37, 39, 43, 48, 50, 60 wherein the user can utilize their user device to capture and transmit digital images of their vehicle, i.e. a virtual representation of their vehicle, to allow the user to indicate the location where the vehicle was damaged and to allow the system to utilize machine learning/artificial intelligence (AI) and image analysis to evaluate the condition of the vehicle, which, in turn, allows the system to inform the user whether the vehicle is totaled or can be repaired, determine whether towing will be required, identify repair shops that can perform the repair, etc.);
receiving incident data corresponding to a vehicle incident involving the vehicle of the user, by providing, on a computing device of the user, a user interface for enabling the user to provide the incident data, the user interface being configured to include (i) a […] virtual representation of the vehicle involved in the vehicle incident, the user being able […] to graphically indicate one or more locations of the vehicle that were damaged by the vehicle incident, and (ii) an interface to receive images, captured by the user, of the vehicle after the vehicle incident (¶ 19, 21, 54 wherein information corresponding to a vehicle incident involving a vehicle of a user is received;
¶ 13, 22, 24, 25, 30, 37, 39, 43, 48, 50, 60 wherein the user can utilize their user device to capture and transmit digital images of their vehicle, i.e. a virtual representation of their vehicle, to allow the user to indicate the location where the vehicle was damaged and to allow the system to utilize artificial intelligence (AI) and image analysis to evaluate the condition of the vehicle, which, in turn, allows the system to inform the user whether the vehicle is totaled or can be repaired, determine whether towing will be required, identify repair shops that can perform the repair, etc.);
In regards to:
wherein receiving the incident data includes:
implementing a guided capture process through the user interface provided on the computing device of the user, […],
the guided capture process including (i) processing, in real-time, image input received through the camera of the computing device, (ii) determining, by processing the image input in real-time, when the vehicle […] is aligned with a corresponding portion of the vehicle, (iii) when the vehicle […] is determined to be aligned with the corresponding portion of the vehicle, automatically capturing the image input
(¶ 22, 25, 59, 60 wherein the user’s device guides the user to capture high quality images so that machine learning models can effectively assess the vehicle damage in real-time);
executing the trained machine learning model on the incident data, including the image input received through implementation of the guided capture process, to determine an estimated cost of repairing the vehicle (¶ 28 wherein the AI system is trained on historical information or other data related to similar damage to allow it to estimate cost, wherein the cost can be broken down by labor hour, part, and other assessments
¶ 27, 32, 55, 57, 58, 61 wherein the system allows for the reporting of a vehicle incident to an insurance company to further facilitate the damage assessment and/or repair process, which, as stated above, the system can later utilize for future assessments that comprise how an incident compares to other similar incidents to determine whether the vehicle can be repaired or is considered a total loss, and all costs associated with repairing, totaling, or replacing (i.e. payout) a vehicle
¶ 15, 22, 28, 55, 59 wherein the system determines a real-time damage estimate
¶ 13, 22, 24, 25, 30, 37, 39, 43, 48, 50, 60 wherein the user can utilize their user device to capture and transmit digital images of their vehicle, i.e. a virtual representation of their vehicle, to allow the user to indicate the location where the vehicle was damaged and to allow the system to utilize machine learning/artificial intelligence (AI) and image analysis to evaluate the condition of the vehicle, which, in turn, allows the system to inform the user whether the vehicle is totaled or can be repaired, determine whether towing will be required, identify repair shops that can perform the repair, etc.);
determining whether the vehicle of the user is repairable or totaled based on the estimated cost of repairing the vehicle (¶ 27, 32 wherein the system determines whether the vehicle is totaled based on whether the cost of repair exceeds a threshold);
automatically generating a list of service providers to facilitate in handling of the damaged vehicle, the list of service providers being specific to the determination of whether the vehicle is repairable or totaled (¶ 27, 32, 50, 54, 55, 65 wherein AI is trained and utilized to analyze the vehicle incident and generate a list of service providers, e.g., repair shops and towing companies, to facilitate handling of the vehicle, which is further based on whether the vehicle is repairable or totaled); and
communicating the list of service providers to a computing device of the user while the user is at the scene of the vehicle incident (¶ 54, 55, 57 wherein the list of service providers is provided to the user to review and select.
For the purposes of compact prosecution, in the event that the applicant does not agree with the analysis provided above, an alternate, more conservative interpretation, in view of Wang 1, has been provided to explicitly teach that image information after a vehicle incident has occurred can be collected and transmitted while at the scene of the vehicle incident.).
Horstmann discloses an image evidence-based vehicle incident reporting system and method that allows a user to capture and provide vehicle incident images after a vehicle incident. Although Horstmann discloses images and providing the images after an incident, Horstmann fails to explicitly disclose capturing and providing three-dimensional images, as well as, under the more conservative interpretation, whether “directly after the damage is incurred” encompasses “while at the scene of the vehicle incident”.
To be more specific, Horstmann fails to explicitly disclose:
a memory storing instructions that, when executed by the one or more processors, cause the computing system to aid a user after a vehicle incident while a vehicle of the user is at a scene of the vehicle incident, by performing operations that include:
receiving incident data corresponding to a vehicle incident involving the vehicle of the user, by providing, on a computing device of the user, a user interface for enabling the user to provide the incident data, the user interface being configured to include (i) a three-dimensional virtual representation of the vehicle involved in the vehicle incident, the user being able to rotate the three-dimensional virtual representation to graphically indicate one or more locations of the vehicle that were damaged by the vehicle incident, and (ii) an interface to receive images, captured by the user, of the vehicle after the vehicle incident;
wherein receiving the incident data includes:
implementing a guided capture process through the user interface provided on the computing device of the user, by generating multiple vehicle outlines to capture images of the vehicle at different angles, each vehicle outline being provided as an overlay or embedded feature of a camera image, and each vehicle outline being configured for a classification and portion of the vehicle,
the guided capture process including (i) processing, in real-time, image input received through the camera of the computing device, (ii) determining, by processing the image input in real-time, when the vehicle outline is aligned with a corresponding portion of the vehicle, (iii) when the vehicle outline is determined to be aligned with the corresponding portion of the vehicle, automatically capturing the image input
executing a machine learning model on the incident data, using real-time cost information for parts and repair, to determine an estimated cost of repairing the vehicle, the machine learning model being trained based on historical claim data associated with claims filed for prior vehicle incidents, the historical claim data including (i) user-indicated damage information identified by individual claimants interacting with a three-dimensional, virtual representation of their respective vehicle after a corresponding vehicle incident, to identify one or more regions of their vehicle that was damaged by a corresponding prior vehicle incident; (ii) image data, based on images captured by individual claimants of their respective vehicles after the respective vehicle is damaged by a corresponding prior vehicle incident; and (iii) payouts of the claims filed for the prior vehicle incidents
communicating the list of service providers to a computing device of the user while the user is at the scene of the vehicle incident
However, Wang 1, which is also directed towards collecting and providing images of a vehicle after an incident has occurred, further teaches that it is well-known in the art for the images to be three-dimensional images and explicitly teaches that such evidence is collected at the scene of the vehicle incident. Wang 1 teaches that traffic accidents have increased and disputes are not rare, while accident processing is not timely. Wang 1 teaches that, in order to ensure that an accurate image is taken, the system utilizes a plurality of outlines to assist the user with capturing an accurate image of the vehicle and damage. As a result, it would have been obvious and beneficial to guide the capture of, collect, and provide a three-dimensional image of the vehicle while at the scene of the incident, as this would greatly reduce resource and time cost and increase the speed at which detailed evidence is obtained, i.e. a guided three-dimensional vehicle image.
(For support see: Page 2, 1st paragraph in Background; Page 2, last paragraph; Page 6, ¶ 3, last paragraph; Page 7; Page 8, ¶ 1, 4, 7; Page 9, ¶ 1, 2, 6; Page 10, ¶ 7, 11, last paragraph)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the image evidence-based vehicle incident reporting system and method of Horstmann the ability to guide a user to capture three-dimensional images of the vehicle incident while the user is at the scene of the incident, as taught by Wang 1, as this reduces cost, increases efficiency, increases accuracy, better represents the damage, and provides more detailed information of the vehicle incident.
The combination of Horstmann and Wang 1 discloses a three-dimensional image evidence-based vehicle incident reporting system that allows for the capturing and transmission of three-dimensional images of vehicle incidents while at the scene of the incident in order to provide detailed information of the vehicle incident and to streamline the reporting and repair/salvage/towing process. Although the combination of Horstmann and Wang 1 discloses the use of three-dimensional images, the combination of Horstmann and Wang 1 fails to explicitly disclose rotating the three-dimensional image in order to more accurately identify damage.
To be more specific, the combination of Horstmann and Wang 1 fails to explicitly disclose:
receiving incident data corresponding to a vehicle incident involving the vehicle of the user, by providing, on a computing device of the user, a user interface for enabling the user to provide the incident data, the user interface being configured to include (i) a three-dimensional virtual representation of the vehicle involved in the vehicle incident, the user being able to rotate the three-dimensional virtual representation to graphically indicate one or more locations of the vehicle that were damaged by the vehicle incident, and (ii) an interface to receive images, captured by the user, of the vehicle after the vehicle incident
However, Wang 2, which is also directed towards capturing and providing a three-dimensional image of a vehicle incident, further teaches that it is well-known in the art to allow a user to rotate the 3D image. Wang 2 teaches that allowing a user to rotate or perform other types of manipulations with the 3D image allows the user to find the damage of the vehicle and assist with assessing the damage. Wang 2 teaches that model rotation allows for establishing the optimal viewing angle of each damage while cutting down on the settlement period by saving time in assessing the damage.
(For support see: Page 2, 1st paragraph in Background, last 2 paragraphs; Page 3 ¶ 7; Page 6 ¶ 1, 9, 10; Page 7 ¶ 3, last paragraph; Page 8, last paragraph)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the three-dimensional image evidence-based vehicle incident reporting system and method of the combination of Horstmann and Wang 1 the ability to allow a user to rotate the image, as taught by Wang 2, as this allows the user to more accurately find the damage of the vehicle and assists with assessing the damage of the vehicle, thereby cutting down on the settlement period by saving time in assessing the damage.
In regards to claims 2, 10, 18, the combination of Horstmann, Wang 1, and Wang 2 discloses the computing system of claim 1 (the non-transitory computer readable medium of claim 9; the method of claim 17), wherein the list of service providers comprises a ranked list of service providers based on a set of parameters associated with the service providers (¶ 37, 53, 54, 64, 65 wherein the list of service providers is ranked based on a set of parameters).
In regards to claims 3, 11, 19, the combination of Horstmann, Wang 1, and Wang 2 discloses the computing system of claim 2 (the non-transitory computer readable medium of claim 10; the method of claim 18), wherein the set of parameters comprise at least one of service provider cost, service provider ratings, service provider specialty, service provider qualifications, or service provider location (¶ 37, 53, 54, 64 wherein the set of parameters comprise, at least, cost, ratings/quality, specialty/qualifications (competence, specialized equipment, available technicians), and location).
In regards to claims 4, 12, 20, the combination of Horstmann, Wang 1, and Wang 2 discloses the computing system of claim 3 (the non-transitory computer readable medium of claim 11; the method of claim 19), wherein the ranked list of service providers is further generated by the trained machine learning model based on user-specific information of the user, the user-specific information comprising at least one of a home location of the user or demographic information of the user (¶ 54 wherein the list of service providers provided by the trained AI system is based on, at least, the user’s home location).
In regards to claims 5, 13, the combination of Horstmann, Wang 1, and Wang 2 discloses the computing system of claim 1 (the non-transitory computer readable medium of claim 9), wherein the list of service providers includes a towing service, and wherein the executed instructions cause the computing system to (Claim 5: communicate with a towing provider) (Claim 13: instruct the user) to have the vehicle towed (Claim 5: from a location of the vehicle incident) to (Claim 5: either) a repair shop (Claim 5: or a scrapyard or a salvage yard, based on the determination of whether the vehicle is repairable or totaled) (Claim 13: in the list of service providers based on the determined damage to the vehicle indicating that the vehicle is repairable) (¶ 54, 65 wherein the system provides the user with a list of service providers, which include a towing service, and instructs the user to have the vehicle towed to a repair shop (also included in the list of service providers) based on, at least, the damage done to the vehicle and its repairability;
¶ 13, 22, 24, 25, 30, 37, 39, 43, 48, 50, 60 wherein the user can utilize their user device to capture and transmit digital images of their vehicle, i.e. a virtual representation of their vehicle, to allow the user to indicate the location where the vehicle was damaged and to allow the system to utilize artificial intelligence (AI) and image analysis to evaluate the condition of the vehicle, which, in turn, allows the system to inform the user whether the vehicle is totaled or can be repaired, determine whether towing will be required, identify repair shops that can perform the repair, etc.;
¶ 27, 32, 50, 54, 55, 65 wherein AI is trained and utilized to analyze the vehicle incident and generate a list of service providers, e.g., repair shops, salvage yards, towing companies, to facilitate handling of the vehicle, which is further based on whether the vehicle is repairable or totaled).
In regards to claims 7, 15, the combination of Horstmann, Wang 1, and Wang 2 discloses the computing system of claim 4 (the non-transitory computer readable medium of claim 12), wherein the executed instructions cause the computing system to (Claim 7: communicate) (Claim 15: provide) the ranked list of service providers to the computing device of the user (¶ 23, 37, 53, 54, 64, 65 wherein the list of service providers is ranked based on a set of parameters and provided to a user’s device).
In regards to claims 8, 16, the combination of Horstmann, Wang 1, and Wang 2 discloses the computing system of claim 7 (the non-transitory computer readable medium of claim 15), wherein the executed instructions further cause the computing system to: based on an authorization provided by the user and the determination that the vehicle is repairable, automatically schedule and coordinate service providers to repair the vehicle (¶ 43, 44, 49, 52, 55, 58, 64, 65 wherein the system, based on authorization from the user, automatically schedules and coordinates service for the vehicle to rectify the vehicle incident
¶ 13, 22, 24, 25, 30, 37, 39, 43, 48, 50, 60 wherein the user can utilize their user device to capture and transmit digital images of their vehicle, i.e. a virtual representation of their vehicle, to allow the user to indicate the location where the vehicle was damaged and to allow the system to utilize machine learning/artificial intelligence (AI) and image analysis to evaluate the condition of the vehicle, which, in turn, allows the system to inform the user whether the vehicle is totaled or can be repaired, determine whether towing will be required, identify repair shops that can perform the repair, etc.).
______________________________________________________________________
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Horstmann (US PGPub 2023/0334438 A1) in view of Wang 1 (CN 112434368 A) in view of Wang 2 (CN 110570513 A) in further view of Ives et al. (US PGPub 2014/0081675 A1).
In regards to claim 14, the combination of Horstmann, Wang 1, and Wang 2 discloses a system and method that utilizes artificial intelligence (AI) to analyze a vehicle incident and facilitate handling of the vehicle with a repair shop, insurer, and towing service, as well as providing a determination on whether the vehicle can be repaired or is a total loss. Although the combination of Horstmann, Wang 1, and Wang 2 discloses that the system assists a user with coordinating with a towing service to tow the vehicle to a repair shop, inform the user(s) on whether the vehicle can be repaired or is a total loss, and coordinate with a salvage yard to assess the condition of the vehicle (¶ 61), the combination of Horstmann, Wang 1, and Wang 2 fails to explicitly disclose whether the system can assist with towing services to a salvage yard.
To be more specific, the combination of Horstmann, Wang 1, and Wang 2 fails to explicitly disclose:
the computing system of claim 1 (the non-transitory computer readable medium of claim 9), wherein the list of service providers includes a towing service, and wherein the executed instructions cause the computing system to instruct the user to have the vehicle towed to a scrapyard or salvage yard in the list of service providers based on the determined damage to the vehicle indicating that the vehicle is totaled.
However, Ives, which also utilizes machine learning to assist with a vehicle incident, further teaches that not only can a user be notified that a vehicle is a total loss, but the system can also recommend and assist with facilitating towing to a salvage yard, especially if the vehicle is severely damaged. The combination of Horstmann, Wang 1, and Wang 2 already discloses that the system not only determines whether a vehicle is repairable or a total loss and identifies towing companies, but also determines whether the vehicle can be safely operated or driven (¶ 21, 45, 55) and, if not, coordinates with a towing service.
The sole difference between the combination of Horstmann, Wang 1, and Wang 2 and the claimed invention is that the combination does not explicitly disclose coordinating with a towing service to tow a vehicle when the system has determined that the vehicle is a total loss or cannot be safely operated/driven. Although one of ordinary skill in the art would have found that a towing service would have obviously been contacted, especially in light of the goals and objectives of the combination of Horstmann, Wang 1, and Wang 2, the combination does not explicitly disclose this process. As a result, one of ordinary skill in the art would have been motivated to look to the teachings of Ives to determine whether it would have been obvious to coordinate with a towing service to have a vehicle towed to a salvage yard. Similar to the combination of Horstmann, Wang 1, and Wang 2, Ives teaches that the system can determine that a vehicle is highly damaged, i.e. unsafe to operate/drive, and, consequently, one of ordinary skill in the art would have found it obvious to coordinate with a towing service to tow the vehicle to a salvage yard. Also similar to the combination of Horstmann, Wang 1, and Wang 2, Ives teaches that once the vehicle is at a salvage yard (or a repair shop), a user can provide an additional assessment on the state of the vehicle to determine how it should be handled. One of ordinary skill in the art would have found it obvious and beneficial to tow a vehicle to a salvage yard when the vehicle has been determined to be highly damaged or a total loss, as this prevents the vehicle from being driven in such an unsafe condition. One of ordinary skill in the art would have also found it obvious that towing a vehicle to a salvage yard when it has been determined that the vehicle cannot be repaired would be a more efficient handling of the vehicle, as this saves time and money, i.e. time and money would not have to be wasted to first tow the vehicle to a repair shop and then to a salvage yard when it has already been determined that the vehicle is a total loss.
(For support see: ¶ 54, 55, 64, 74, 94, 95, 185)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate into the vehicle incident assessment and handling system and method of the combination of Horstmann, Wang 1, and Wang 2 the ability to coordinate and facilitate towing of a highly damaged and unsafe vehicle to a salvage yard, as taught by Ives, as this would be a more efficient use of time and money by directly towing a total loss to a salvage yard while also ensuring that an unsafe vehicle is not operated/driven.
Response to Arguments
Applicant's arguments filed February 5, 2026 have been fully considered but they are not persuasive.
Rejection under 35 USC 101
The rejection under 35 USC 101 has been withdrawn.
The claimed invention integrates itself into a practical application because it improves upon image-based evidence taking of a vehicle by providing a device that guides the user to capture an image of the vehicle and damage at the correct capturing position to ensure that an accurate three-dimensional image of the damaged vehicle is captured. The device monitors the image that a user is about to capture and guides them, before the image is captured, to ensure that a proper image is obtained.
Rejection under 35 USC 102/103
The Examiner asserts that the applicant’s arguments are directed towards newly amended limitations and are, therefore, considered moot. However, the Examiner has responded to the newly submitted amendments, which the arguments are directed to, in the rejection above, thereby addressing the applicant’s arguments.
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, can be found in the attached PTO-892 Notice of References Cited.
Brandmaier et al. (US Patent 10,572,944 B1); Zhou (CN 108632530 B); Veliche (US Patent 10,679,301 B1); Zhang et al. (WO 2020/042800 A1); Chatfield et al. (US PGPub 2022/0138860 A1) – which disclose taking images of a vehicle after a vehicle incident
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERARDO ARAQUE JR whose telephone number is (571)272-3747. The examiner can normally be reached Monday - Friday 8-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah Monfeldt can be reached at 571-270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
GERARDO ARAQUE JR
Primary Examiner
Art Unit 3629
/GERARDO ARAQUE JR/Primary Examiner, Art Unit 3629 2/27/2026