Prosecution Insights
Last updated: April 19, 2026
Application No. 17/949,709

CONTEXT AWARE VEHICLE-BASED PROJECTION SYSTEM

Final Rejection: §103, §112
Filed: Sep 21, 2022
Examiner: WILLIS, BRANDON Z.
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: International Business Machines Corporation
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (140 granted / 203 resolved; +17.0% vs TC avg; grants above average)
Interview Lift: +38.3% allowance on resolved cases with interview (strong)
Typical Timeline: 2y 8m avg prosecution; 23 currently pending
Career History: 226 total applications across all art units

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 27.3% (-12.7% vs TC avg)
§112: 9.1% (-30.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 203 resolved cases
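The headline figures above are simple ratios over the examiner's 203 resolved cases; a minimal sanity-check sketch (function and variable names are illustrative, not from any analytics tool):

```python
# Recompute the headline examiner metrics from the raw counts shown above.
# Names are illustrative; only the counts (140 granted / 203 resolved) and
# the +17.0% delta come from this report.

def allowance_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career_rate = allowance_rate_pct(140, 203)
tc_average = career_rate - 17.0  # implied by "+17.0% vs TC avg"

print(round(career_rate))  # 69
print(round(tc_average))   # 52
```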

Office Action

Grounds: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claims 1, 13, and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claims 2, 4, 6, and 10 are objected to because of the following informalities: In claims 2, 4, 6, and 10, each instance of “an occupant” should read “the occupant”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 16 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
Claim 16 recites the limitation “wherein the projection notifies persons outside the vehicle to seek help for the driver” which is previously recited in claim 13 from which claim 16 depends, and therefore does not further limit the subject matter of claim 13. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6, 13, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Frimpong (U.S. Patent No. 10845693; hereinafter Frimpong) and further in view of Suthar et al. (U.S. Publication No. 2021/0086778; hereinafter Suthar).

Regarding claim 1, Frimpong teaches a computer-implemented method, comprising: monitoring contextual information of a vehicle during operation thereof (Frimpong: Col. 7, lines 34-37; i.e., at step 210, the data acquisition interface 1042 receives data corresponding to the vehicle 102, the data being received for the plurality of attributes); determining, based on the contextual information, that a condition is met to project, by a vehicle-based projection system mounted to the vehicle, a projection indicative of a contextual condition associated with the vehicle (Frimpong: Col. 7, lines 38-40; i.e., at step 220, the control unit 1044 determines the one or more of a previous, a current and a future state of the vehicle 102 from the data; Col. 6, lines 37-39; i.e., the hologram 1050 displays … “TURNIN RIGHT IN 50 m” (future state), of the vehicle 102; the system determines a projection condition is met, such as an upcoming turn); and in response to the determination that the condition is met, projecting the projection indicative of the contextual condition (Frimpong: Col. 7, lines 40-43; i.e., at step 230, the projector unit 1046 displays the one or more of the previous, the current and the future state of the vehicle 102, in form of a hologram 1050 projected through the exterior of the vehicle 102).
Frimpong does not explicitly teach determining that the condition is met based on the contextual information using artificial intelligence; receiving feedback from an occupant of the vehicle; and improving the determining based on the feedback from the occupant. However, in the same field of endeavor, Suthar teaches determining that the condition is met based on the contextual information using artificial intelligence (Suthar: Par. 69; i.e., the machine learning engine 206 may … be configured to perform one or more operations for analyzing the sensor data and other information associated with the historical in-vehicle emergencies); receiving feedback from an occupant of the vehicle; and improving the determining based on the feedback from the occupant (Suthar: Par. 47; i.e., the application server 106 may be configured to enable the occupants 110 and 112 to provide a feedback, via the user device 102a or any other device, regarding the in-vehicle emergency and the emergency assistance; Par. 46; i.e., the application server 106 may apply one or more supervised learning techniques to use the stored details for efficient handling of future in-vehicle emergencies). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Frimpong to have further incorporated determining that the condition is met based on the contextual information using artificial intelligence; receiving feedback from an occupant of the vehicle; and improving the determining based on the feedback from the occupant, as taught by Suthar. Doing so would make vehicle emergency detection more efficient and accurate (Suthar: Par. 27; i.e., the methods and systems significantly improve efficiency and accuracy of in-vehicle emergency assistance). Regarding claim 2, Frimpong in view of Suthar teaches the method according to claim 1. 
Frimpong further teaches wherein the contextual information includes input received directly from an occupant of the vehicle (Frimpong: Col. 6, lines 46-48; i.e., the microphone 1048 is configured to receive a voice input from the driver and transmit the voice input to the control unit 1044; Col. 6, lines 58-60; i.e., the microphone 1048 may also be configured to receive the custom message, such as “THANK YOU”, for inclusion in the hologram 1050 by the projector unit 1046). Regarding claim 3, Frimpong in view of Suthar teaches the method according to claim 2. Frimpong further teaches wherein the projection is manually defined by the occupant (Frimpong: Col. 6, lines 58-60; i.e., the microphone 1048 may also be configured to receive the custom message, such as “THANK YOU”, for inclusion in the hologram 1050 by the projector unit 1046). Regarding claim 4, Frimpong in view of Suthar teaches the method according to claim 3. Frimpong further teaches wherein the projection includes text input by the occupant (Frimpong: Col. 6, lines 58-60; i.e., the microphone 1048 may also be configured to receive the custom message, such as “THANK YOU”, for inclusion in the hologram 1050 by the projector unit 1046). Regarding claim 5, Frimpong in view of Suthar teaches the method according to claim 1. Suthar further teaches wherein monitoring the contextual information includes monitoring a physiology of an occupant of the vehicle, wherein the contextual information includes information indicative of an adverse condition occurring with a driver of the vehicle. (Suthar: Par. 37; i.e., the in-vehicle emergency associated with the any of the occupants 110 and 112 may be one of a medical emergency; Par. 48; i.e., the context of the detected in-vehicle emergency may be determined based on one or more parameters including … medical factors associated with the occupants 110 and 112; Par. 57; i.e., the application server 106 may determine … the medical factors ... 
based on analysis and processing of the sensor data). Regarding claim 6, Frimpong in view of Suthar teaches the method according to claim 5. Frimpong further teaches wherein monitoring the contextual information includes monitoring a voice of an occupant of the vehicle, wherein determining that the condition is met includes performing voice recognition on the voice of the occupant, and determining that a result of the voice recognition meets the condition, wherein the projection is indicative of the result of the voice recognition (Frimpong: Col. 6, lines 46-48; i.e., the microphone 1048 is configured to receive a voice input from the driver and transmit the voice input to the control unit 1044; Col. 6, lines 58-60; i.e., the microphone 1048 may also be configured to receive the custom message, such as “THANK YOU”, for inclusion in the hologram 1050 by the projector unit 1046). Regarding claim 13, Frimpong teaches a computer program product for creating a contextually appropriate projection adjacent a vehicle, the computer program product comprising: one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions comprising (Frimpong: Col. 8, lines 35-37; i.e., the methods described herein may be implemented by a series of computer executable instructions residing on a suitable computer readable medium): program instructions to monitor contextual information of a vehicle during operation thereof (Frimpong: Col. 7, lines 34-37; i.e., at step 210, the data acquisition interface 1042 receives data corresponding to the vehicle 102, the data being received for the plurality of attributes); program instructions to determine that a condition is met to project, by a vehicle- based projection system mounted to the vehicle, a projection indicative of a contextual condition associated with the vehicle (Frimpong: Col. 
7, lines 38-40; i.e., at step 220, the control unit 1044 determines the one or more of a previous, a current and a future state of the vehicle 102 from the data; Col. 6, lines 37-39; i.e., the hologram 1050 displays … “TURNIN RIGHT IN 50 m” (future state), of the vehicle 102; the system determines a projection condition is met, such as an upcoming turn); and program instructions to project the projection indicative of the contextual condition in response to the determination that the condition is met (Frimpong: Col. 7, lines 40-43; i.e., at step 230, the projector unit 1046 displays the one or more of the previous, the current and the future state of the vehicle 102, in form of a hologram 1050 projected through the exterior of the vehicle 102). Frimpong does not explicitly teach wherein the contextual information includes information indicative of an adverse condition occurring with a driver of the vehicle; and the projection notifying persons outside of the vehicle to seek help for the driver. However, in the same field of endeavor, Suthar teaches wherein the contextual information includes information indicative of an adverse condition occurring with a driver of the vehicle (Suthar: Par. 37; i.e., the in-vehicle emergency associated with the any of the occupants 110 and 112 may be one of a medical emergency; Par. 48; i.e., the context of the detected in-vehicle emergency may be determined based on one or more parameters including … medical factors associated with the occupants 110 and 112; Par. 57; i.e., the application server 106 may determine … the medical factors ... based on analysis and processing of the sensor data); and the projection notifying persons outside of the vehicle to seek help for the driver (Suthar: Par. 61; i.e., the application server 106 may be further configured to displaying via the external vehicular display of the vehicle 102 textual and graphical content indicating the in-vehicle emergency; Par. 
92; i.e., the external vehicular display … displays the emergency message to alert the passer-by individuals; Par. 112; i.e., responses may include … displaying textual and graphical content (for example, an emergency message such as “NEED HELP”) on the external vehicular display). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the computer program product of Frimpong to have further incorporated wherein the contextual information includes information indicative of an adverse condition occurring with a driver of the vehicle; and the projection notifying persons outside of the vehicle to seek help for the driver, as taught by Suthar. Doing so would allow passer-by individuals to reach the driver and provide assistance (Suthar: Par. 113; i.e., with such effective and efficient way of communication, the various entities (such as the drivers 606a-606c, the passer-by individuals 608a-608f, or the response teams) may reach the incident location to help the occupant). Regarding claim 14, Frimpong in view of Suthar teaches the computer program product according to claim 13. Frimpong further teaches wherein the contextual information includes input received directly from an occupant of the vehicle (Frimpong: Col. 6, lines 46-48; i.e., the microphone 1048 is configured to receive a voice input from the driver and transmit the voice input to the control unit 1044; Col. 6, lines 58-60; i.e., the microphone 1048 may also be configured to receive the custom message, such as “THANK YOU”, for inclusion in the hologram 1050 by the projector unit 1046). Regarding claim 16, Frimpong in view of Suthar teaches the computer program product according to claim 13. Suthar further teaches wherein the projection notifies persons outside of the vehicle to seek help for the driver (Suthar: Par. 
61; i.e., the application server 106 may be further configured to displaying via the external vehicular display of the vehicle 102 textual and graphical content indicating the in-vehicle emergency; Par. 92; i.e., the external vehicular display … displays the emergency message to alert the passer-by individuals; Par. 112; i.e., responses may include … displaying textual and graphical content (for example, an emergency message such as “NEED HELP”) on the external vehicular display). Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Frimpong in view of Suthar and further in view of Marti (U.S. Publication No. 2015/0203023; hereinafter Marti). Regarding claim 7, Frimpong in view of Suthar teaches the method according to claim 1, but does not explicitly teach wherein monitoring the contextual information includes monitoring driving behavior of the vehicle, wherein the projection includes an indication of a boundary around the vehicle calculated based on the monitored driving behavior. However, in the same field of endeavor, Marti teaches wherein monitoring the contextual information includes monitoring driving behavior of the vehicle (Marti: Par. 42; i.e., roadway projection system 100 may project any of the images discussed thus far in response to detecting implicit behaviors of driver 130), wherein the projection includes an indication of a boundary around the vehicle calculated based on the monitored driving behavior (Marti: Par. 52; i.e., if roadway projection system 100 determines that vehicle 100 has come to a complete stop due to heavy traffic, roadway projection system 100 could contract bounding zone 341. In general, roadway projection system 100 may dynamically determine the shape and size of bounding zone 341 based on operating conditions associated with vehicle 110). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Frimpong and Suthar to have further incorporated wherein monitoring the contextual information includes monitoring driving behavior of the vehicle, wherein the projection includes an indication of a boundary around the vehicle calculated based on the monitored driving behavior, as taught by Marti. Doing so would allow other vehicles to identify the projected boundary and avoid collisions (Marti: Par. 70; i.e., roadway projection system 100 operates as part of a vehicle-to-vehicle driving system 700 to avoid collisions with other vehicles). Regarding claim 15, Frimpong in view of Suthar teaches the computer program product according to claim 13, but does not explicitly teach wherein monitoring the contextual information includes monitoring driving behavior of the vehicle, wherein the projection includes an indication of a boundary around the vehicle calculated based on the monitored driving behavior. However, in the same field of endeavor, Marti teaches wherein monitoring the contextual information includes monitoring driving behavior of the vehicle (Marti: Par. 42; i.e., roadway projection system 100 may project any of the images discussed thus far in response to detecting implicit behaviors of driver 130), wherein the projection includes an indication of a boundary around the vehicle calculated based on the monitored driving behavior (Marti: Par. 52; i.e., if roadway projection system 100 determines that vehicle 100 has come to a complete stop due to heavy traffic, roadway projection system 100 could contract bounding zone 341. In general, roadway projection system 100 may dynamically determine the shape and size of bounding zone 341 based on operating conditions associated with vehicle 110). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the computer program product of Frimpong and Suthar to have further incorporated wherein monitoring the contextual information includes monitoring driving behavior of the vehicle, wherein the projection includes an indication of a boundary around the vehicle calculated based on the monitored driving behavior, as taught by Marti. Doing so would allow other vehicles to identify the projected boundary and avoid collisions (Marti: Par. 70; i.e., roadway projection system 100 operates as part of a vehicle-to-vehicle driving system 700 to avoid collisions with other vehicles).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Frimpong in view of Suthar and further in view of De Mola (U.S. Publication No. 2023/0303031; hereinafter De Mola).

Regarding claim 8, Frimpong in view of Suthar teaches the method according to claim 1, but does not teach wherein determining that the condition is met includes determining that the vehicle is stolen. However, in the same field of endeavor, De Mola teaches wherein determining that the condition is met includes determining that the vehicle is stolen (De Mola: Par. 22; i.e., a vehicle security device 116 supervises the security of the vehicle with different event sensors 118 that monitor the vehicle integrity and detect all attempts of a theft; Par. 28; i.e., In case of theft, the optional locator lights on the vehicle's side may project the information “stolen car” on the road surface). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Frimpong and Suthar to have further incorporated wherein determining that the condition is met includes determining that the vehicle is stolen, as taught by De Mola.
Doing so would notify nearby witnesses that the vehicle is stolen and could encourage the witnesses to notify authorities (De Mola: Par. 20; i.e., notifies to the surrounding environment that a theft is being attempted, while the stolen vehicle is moving, which could help to create more awareness of the on-going crime and could motivate witnesses to act to notify police or take other measures to foil the theft). Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Frimpong in view of Suthar and further in view of Hakki et al. (U.S. Publication No. 2019/0329708; hereinafter Hakki). Regarding claim 9, Frimpong in view of Suthar teaches the method according to claim 1. Frimpong further teaches wherein contextual information includes vehicle speed (Frimpong: Col. 6, lines 36-38; i.e., as can be seen from FIG. 1B, the hologram 1050 displays a current speed “52 mph”). Frimpong does not explicitly teach wherein the contextual information includes a weather condition, wherein the projection includes an indication of an estimated distance for the vehicle to stop upon application of brakes thereof. However, in the same field of endeavor, Hakki teaches wherein the contextual information includes a weather condition (Hakki: Par. 79; i.e., the on-board computer 12 communicates with various input devices or sensors to obtain information regarding … road conditions/weather), wherein the projection includes an indication of an estimated distance for the vehicle to stop upon application of brakes thereof (Hakki: Par. 66; i.e., the processor then controls the one or more projectors 60 to project an image or outline on the pavement… For example, if based upon the stored data, the stopping distance of the index vehicle 5 at 60 miles per hour is 80 feet on dry pavement, and then the front safety zone 210 will be projected on the road, occupying approximately 80 feet in front of the index vehicle). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Frimpong and Suthar to have further incorporated wherein the contextual information includes a weather condition, wherein the projection includes an indication of an estimated distance for the vehicle to stop upon application of brakes thereof, as taught by Hakki. Doing so would aid surrounding vehicles in identifying safe paths and speeds to avoid collisions (Hakki: Par. 23; i.e., The flat image or holographic image will be an aid to all vehicles indicative of safe paths and speeds). Regarding claim 17, Frimpong in view of Suthar teaches the computer program product according to claim 13. Frimpong further teaches wherein contextual information includes vehicle speed (Frimpong: Col. 6, lines 36-38; i.e., as can be seen from FIG. 1B, the hologram 1050 displays a current speed “52 mph”). Frimpong does not explicitly teach wherein the contextual information includes a weather condition, wherein the projection includes an indication of an estimated distance for the vehicle to stop upon application of brakes thereof. However, in the same field of endeavor, Hakki teaches wherein the contextual information includes a weather condition (Hakki: Par. 79; i.e., the on-board computer 12 communicates with various input devices or sensors to obtain information regarding … road conditions/weather), wherein the projection includes an indication of an estimated distance for the vehicle to stop upon application of brakes thereof (Hakki: Par. 66; i.e., the processor then controls the one or more projectors 60 to project an image or outline on the pavement… For example, if based upon the stored data, the stopping distance of the index vehicle 5 at 60 miles per hour is 80 feet on dry pavement, and then the front safety zone 210 will be projected on the road, occupying approximately 80 feet in front of the index vehicle). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the computer program product of Frimpong and Suthar to have further incorporated wherein the contextual information includes a weather condition, wherein the projection includes an indication of an estimated distance for the vehicle to stop upon application of brakes thereof, as taught by Hakki. Doing so would aid surrounding vehicles in identifying safe paths and speeds to avoid collisions (Hakki: Par. 23; i.e., The flat image or holographic image will be an aid to all vehicles indicative of safe paths and speeds). Claims 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Frimpong in view of Suthar and further in view of Miyahara et al. (U.S. Publication No. 2020/0058222; hereinafter Miyahara). Regarding claim 10, Frimpong in view of Suthar teaches the method according to claim 1, but does not explicitly teach outputting, to an occupant of the vehicle, a content item to project, wherein determining that a condition is met to project includes receiving a selection of the content item from the occupant, wherein the selected content item is projected. However, in the same field of endeavor, Miyahara teaches outputting, to an occupant of the vehicle, a content item to project, wherein determining that a condition is met to project includes receiving a selection of the content item from the occupant (Miyahara: Par. 67; i.e., the driver inputs his/her intention by selecting any input message from a list of input messages displayed on the screen), wherein the selected content item is projected (Miyahara: Par. 70; i.e., in step S109, the road projection controller 5 controls the road projector 21 to project the notification message converted by the message converting unit 4 onto at least part of a road located around the object). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Frimpong and Suthar to have further incorporated outputting, to an occupant of the vehicle, a content item to project, wherein determining that a condition is met to project includes receiving a selection of the content item from the occupant, wherein the selected content item is projected, as taught by Miyahara. Doing so would allow the system to project simplified messages (Miyahara: Par. 69; i.e., the message converting unit 4 converts the concept of the input message into a notification message that is easy for the child to understand, such as a symbol indicating “AFTER YOU”). Regarding claim 18, Frimpong in view of Suthar teaches the computer program product according to claim 13, but does not explicitly teach output, to an occupant of the vehicle, a content item to project, wherein determining that a condition is met to project includes receiving a selection of the content item from the occupant, wherein the selected content item is projected. However, in the same field of endeavor, Miyahara teaches output, to an occupant of the vehicle, a content item to project, wherein determining that a condition is met to project includes receiving a selection of the content item from the occupant (Miyahara: Par. 67; i.e., the driver inputs his/her intention by selecting any input message from a list of input messages displayed on the screen), wherein the selected content item is projected (Miyahara: Par. 70; i.e., in step S109, the road projection controller 5 controls the road projector 21 to project the notification message converted by the message converting unit 4 onto at least part of a road located around the object). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the computer program product of Frimpong and Suthar to have further incorporated output, to an occupant of the vehicle, a content item to project, wherein determining that a condition is met to project includes receiving a selection of the content item from the occupant, wherein the selected content item is projected, as taught by Miyahara. Doing so would allow the system to project simplified messages (Miyahara: Par. 69; i.e., the message converting unit 4 converts the concept of the input message into a notification message that is easy for the child to understand, such as a symbol indicating “AFTER YOU”). Claims 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Frimpong in view of Suthar and further in view of Yan (U.S. Patent No. 9994147; hereinafter Yan). Regarding claim 11, Frimpong in view of Suthar teaches the method according to claim 1, but does not teach wherein monitoring the contextual information includes detecting a location of a second vehicle in relation to the vehicle, wherein the condition includes the vehicles being closer than a minimum safe distance between the vehicles, and wherein the projection includes an indication of the minimum safe distance from the vehicle. However, in the same field of endeavor, Yan teaches wherein monitoring the contextual information includes detecting a location of a second vehicle in relation to the vehicle (Yan: Col. 2, lines 10-13; i.e., The proximity sensors 22 can be any suitable sensors configured for measuring distance between the vehicle 10 and surrounding vehicles, such as a following vehicle 120), wherein the condition includes the vehicles being closer than a minimum safe distance between the vehicles (Yan: Col. 
4, lines 11-16; i.e., the threshold safety distance Z can be any suitable distance, such as any suitable predetermined distance, whereby when the actual distance X is less than the threshold safety distance Z the control module 40 instructs the projector 50 to project the projected image 70A, or any other suitable image), and wherein the projection includes an indication of the minimum safe distance from the vehicle (Yan: Col. 5, lines 47-53; i.e., the lead vehicle 10 projects projected image 70A, which is a virtual, three-dimensional, image of a rear of the lead vehicle 10 projected above the road surface. The projected image 70A results in the driver of the following vehicle 110 seeing perceived distance Y between the vehicles 10 and 110, which is less than actual distance X). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Frimpong and Suthar to have further incorporated wherein monitoring the contextual information includes detecting a location of a second vehicle in relation to the vehicle, wherein the condition includes the vehicles being closer than a minimum safe distance between the vehicles, and wherein the projection includes an indication of the minimum safe distance from the vehicle, as taught by Yan. Doing so would result in trailing vehicles stopping a safe distance away from the vehicle (Yan: Col. 5, lines 53-55; i.e., Therefore, the driver of the following vehicle 110 will likely stop a safe distance from the vehicle 10). 
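Yan's trigger, as mapped above, is a threshold comparison: project when the measured distance X to the following vehicle drops below the threshold safety distance Z. A minimal sketch (the X/Z naming follows Yan's description; the concrete threshold value is a hypothetical placeholder, since Yan leaves Z as "any suitable distance"):

```python
# Hedged sketch of Yan's trigger: project the safe-distance image when the
# actual distance X falls below the threshold safety distance Z.

def should_project(actual_distance_x: float, threshold_z: float) -> bool:
    """True when the following vehicle is closer than the safe threshold."""
    return actual_distance_x < threshold_z

THRESHOLD_Z_M = 20.0  # hypothetical value, in meters

print(should_project(12.0, THRESHOLD_Z_M))  # True  (too close: project)
print(should_project(35.0, THRESHOLD_Z_M))  # False
```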
Regarding claim 19, Frimpong in view of Suthar teaches the computer program product according to claim 13, but does not teach wherein monitoring the contextual information includes detecting a location of a second vehicle in relation to the vehicle, wherein the condition includes the vehicles being closer than a minimum safe distance between the vehicles, and wherein the projection includes an indication of the minimum safe distance from the vehicle. However, in the same field of endeavor, Yan teaches wherein monitoring the contextual information includes detecting a location of a second vehicle in relation to the vehicle (Yan: Col. 2, lines 10-13; i.e., The proximity sensors 22 can be any suitable sensors configured for measuring distance between the vehicle 10 and surrounding vehicles, such as a following vehicle 120), wherein the condition includes the vehicles being closer than a minimum safe distance between the vehicles (Yan: Col. 4, lines 11-16; i.e., the threshold safety distance Z can be any suitable distance, such as any suitable predetermined distance, whereby when the actual distance X is less than the threshold safety distance Z the control module 40 instructs the projector 50 to project the projected image 70A, or any other suitable image), and wherein the projection includes an indication of the minimum safe distance from the vehicle (Yan: Col. 5, lines 47-53; i.e., the lead vehicle 10 projects projected image 70A, which is a virtual, three-dimensional, image of a rear of the lead vehicle 10 projected above the road surface. The projected image 70A results in the driver of the following vehicle 110 seeing perceived distance Y between the vehicles 10 and 110, which is less than actual distance X). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the computer program product of Frimpong and Suthar to have further incorporated wherein monitoring the contextual information includes detecting a location of a second vehicle in relation to the vehicle, wherein the condition includes the vehicles being closer than a minimum safe distance between the vehicles, and wherein the projection includes an indication of the minimum safe distance from the vehicle, as taught by Yan. Doing so would result in trailing vehicles stopping a safe distance away from the vehicle (Yan: Col. 5, lines 53-55; i.e., Therefore, the driver of the following vehicle 110 will likely stop a safe distance from the vehicle 10).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Frimpong in view of Suthar and further in view of Yan and Marti.

Regarding claim 12, Frimpong in view of Suthar teaches the method according to claim 1, but does not explicitly teach wherein monitoring the contextual information includes detecting a weather condition surrounding the vehicle, wherein the condition includes the weather condition matching a predefined condition. However, in the same field of endeavor, Yan teaches wherein monitoring the contextual information includes detecting a weather condition surrounding the vehicle (Yan: Col. 2, lines 26-29; i.e., the road condition sensor 26 can be a sensor configured to sense when tires of the vehicle 10 slip, thereby indicating that the vehicle 10 is traveling across a slick surface, such as a snow covered road, a wet road), wherein the condition includes the weather condition matching a predefined condition (Yan: Col. 4, lines 36-39; i.e., when the road condition sensor 26 determines that the road conditions are slippery or otherwise poor, thus resulting in increased stopping distances).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Frimpong and Suthar to have further incorporated wherein monitoring the contextual information includes detecting a weather condition surrounding the vehicle, wherein the condition includes the weather condition matching a predefined condition, as taught by Yan. Doing so would allow the system to adjust the threshold safety distance based on the weather condition (Yan: Col. 4, lines 33-37; i.e., the control module 40 can also increase the threshold safety distance Z, as well as the distance that the projected image 70A is projected from the lead vehicle).

Frimpong, Suthar, and Yan do not explicitly teach wherein the projection includes an indication of a boundary around the vehicle. However, in the same field of endeavor, Marti teaches wherein the projection includes an indication of a boundary around the vehicle (Marti: Par. 50; i.e., roadway projection system 100 projects a bounding zone 341 that surrounds vehicle 110. Bounding zone 341 represents a protected territory around vehicle 110 that driver 130 wishes to prevent other vehicles from entering).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Frimpong, Suthar, and Yan to have further incorporated wherein the projection includes an indication of a boundary around the vehicle, as taught by Marti. Doing so would allow other vehicles to identify the projected boundary and avoid collisions (Marti: Par. 70; i.e., roadway projection system 100 operates as part of a vehicle-to-vehicle driving system 700 to avoid collisions with other vehicles).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Frimpong in view of Wu et al. (U.S. Publication No. 2011/0307144; hereinafter Wu).
Regarding claim 20, Frimpong teaches a vehicle, comprising: a computer; one or more projectors (Frimpong: Col. 4, lines 24-35; i.e., a vehicle 102 with a vehicle information device 104 installed… the vehicle information device 104 is envisaged to include … a control unit 1044 and a projector unit 1046); one or more monitoring devices (Frimpong: Col. 4, lines 44-45; i.e., the data acquisition interface 1042 is configured to receive data corresponding to the vehicle); and logic integrated with the computer, executable by the computer, or integrated with and executable by the computer, the logic being configured to (Frimpong: Col. 4, lines 4-6; i.e., a vehicle that collects data pertaining to the vehicle and uses internal logic to determine previous, present and future state of the vehicle): monitor contextual information of the vehicle during operation thereof (Frimpong: Col. 7, lines 34-37; i.e., at step 210, the data acquisition interface 1042 receives data corresponding to the vehicle 102, the data being received for the plurality of attributes); determine that a condition is met to project, by one or more of the projectors, a projection indicative of a contextual condition associated with the vehicle (Frimpong: Col. 7, lines 38-40; i.e., at step 220, the control unit 1044 determines the one or more of a previous, a current and a future state of the vehicle 102 from the data; Col. 6, lines 37-39; i.e., the hologram 1050 displays … “TURNIN RIGHT IN 50 m” (future state), of the vehicle 102; the system determines a projection condition is met, such as an upcoming turn); and in response to the determination that the condition is met, project the projection indicative of the contextual condition (Frimpong: Col. 7, lines 40-43; i.e., at step 230, the projector unit 1046 displays the one or more of the previous, the current and the future state of the vehicle 102, in form of a hologram 1050 projected through the exterior of the vehicle 102). 
Frimpong does not explicitly teach at least some of the contextual information being received from a steering wheel-mounted physiology sensor. However, in the same field of endeavor, Wu teaches at least some of the contextual information being received from a steering wheel-mounted physiology sensor (Wu: Par. 24; i.e., there are bio sensors fitted on the wheel of the vehicle for recording physiological statuses of the driver, those recorded physiological statuses can also be included in the trip records of the vehicle).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the vehicle of Frimpong to have further incorporated at least some of the contextual information being received from a steering wheel-mounted physiology sensor, as taught by Wu. Doing so would prevent danger to the vehicle passengers due to a driver health condition (Wu: Par. 24; i.e., the physiology of the driver can be monitored so as to prevent the passengers from any danger caused by the health condition of the driver).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON Z WILLIS whose telephone number is (571) 272-5427. The examiner can normally be reached weekdays 8:00-5:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin D. Bishop, can be reached at (571) 270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRANDON Z WILLIS/
Examiner, Art Unit 3665

Prosecution Timeline

Sep 21, 2022
Application Filed
Feb 08, 2024
Response after Non-Final Action
Nov 04, 2025
Non-Final Rejection — §103, §112
Jan 22, 2026
Interview Requested
Feb 02, 2026
Examiner Interview Summary
Feb 02, 2026
Applicant Interview (Telephonic)
Feb 04, 2026
Response Filed
Mar 19, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602931
IDENTIFICATION OF UNKNOWN TRAFFIC OBJECTS
2y 5m to grant Granted Apr 14, 2026
Patent 12589767
SYSTEMS AND METHODS FOR GENERATING A DRIVING TRAJECTORY
2y 5m to grant Granted Mar 31, 2026
Patent 12545299
DYNAMICALLY WEIGHTING TRAINING DATA USING KINEMATIC COMPARISON
2y 5m to grant Granted Feb 10, 2026
Patent 12534072
TRANSPORT DANGEROUS SITUATION CONSENSUS
2y 5m to grant Granted Jan 27, 2026
Patent 12528483
METHOD, ELECTRONIC DEVICE AND MEDIUM FOR TARGET STATE ESTIMATION
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
99%
With Interview (+38.3%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 203 resolved cases by this examiner. Grant probability derived from career allow rate.
