Prosecution Insights
Last updated: April 19, 2026
Application No. 18/599,076

HUMAN-CENTRIC VEHICULAR METAVERSE PLATFORM FOR ROAD-SIDE AR/VR CONTENT DELIVERY

Final Rejection (§103, §112)
Filed: Mar 07, 2024
Examiner: MAZUMDER, SAPTARSHI
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: UNIVERSITY OF SOUTHERN CALIFORNIA
OA Round: 2 (Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 8m
Grant Probability with Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% of resolved cases (241 granted / 375 resolved; +2.3% vs TC avg)
Interview Lift: +11.8% for resolved cases with interview (moderate, roughly +12% lift)
Typical Timeline: 2y 8m avg prosecution; 27 applications currently pending
Career History: 402 total applications across all art units

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 375 resolved cases.

Office Action

Rejections under §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9 and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 9 recites “the level of detail is selected based on a distance between a physical target and a viewing device.” The phrase “the level of detail” lacks antecedent basis. As a result, the limitation is indefinite. Claim 11 is also rejected by virtue of its dependency.

Claim 11 recites “including by including by converting from a Cartesian coordinate…”. This limitation is indefinite because it is unclear whether the sentence is incomplete or text is missing.
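For context on the disputed claim 9 limitation (this sketch is not part of the record, and the thresholds, names, and LOD bands below are invented for illustration): distance-based level-of-detail selection of the kind the claim recites is commonly a simple banded lookup, with nearer targets receiving denser meshes.

```python
# Hypothetical distance-based level-of-detail (LOD) selector, illustrating
# the kind of mechanism claim 9 recites. All bands are invented examples.

def select_lod(distance_m: float) -> str:
    """Pick a mesh level of detail from the target-to-viewer distance."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    # Nearer targets get denser meshes; farther ones get coarser meshes.
    bands = [(50.0, "high"), (200.0, "medium"), (500.0, "low")]
    for limit, lod in bands:
        if distance_m < limit:
            return lod
    return "billboard"  # beyond all bands: flat impostor

print(select_lod(30.0))   # high
print(select_lod(120.0))  # medium
print(select_lod(900.0))  # billboard
```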
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-5, 7, 12-13, 15-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sibley (US Patent No. 10242457, “Sibley”) in view of Piemonte et al. (US Pat. Pub. No. 20210166490, “Piemonte”).

Regarding claim 12, Sibley teaches a system (Fig. 8) comprising: one or more sensors configured to obtain sensor data for a vehicle (Col 10 lines 40-44: “The vehicle 110 can be equipped with a number and wide variety of sensors 108. These sensors 108 may enable various systems on the vehicle 110 to detect, for example, acceleration, deceleration, wheel slip, and turning of the vehicle 110”); and a processor that is coupled to the one or more sensors (Fig. 8, elements 820 and 832) and that is configured to at least facilitate: selecting, using the sensor data, a physical target that comprises a physical element in proximity to the vehicle as the vehicle is travelling (Col 6 lines 20-38: “In some examples, when identifying objects that are proximate to the vehicle, the system can utilize the pose estimation and/or the gaze detection to identify an object of interest for the passenger… For another instance, the passenger may be looking through the translucent display at a specific object that is located in the environment outside of the vehicle”) based on a rendering of an image from aerial map data in addition to a head pose of a driver of the vehicle, the head pose including a position and orientation of a head of the driver (Col 3 lines 23-25: “determine a pose of a head of the passenger, where the pose can include the three-dimensional location (e.g., (X,Y,Z) coordinates) and/or orientation of the head of the passenger within the vehicle”; Col 6 lines 38-42: “To identify the specific object, the system can utilize the map data, information about the object (e.g. its size, position relative to the vehicle, classification, etc.), the pose of the passenger relative to the vehicle, and the gaze detection to determine that the passenger is focusing on the specific object”; Col 9 lines 57-60: “In some instances, map data may be associated with topologic maps, metric maps, semantic maps, street maps, rail maps, weather maps, GPS data, satellite images, geographic information, street-view images, coordinates, points of interest”), but is silent regarding selecting based on a rendering of an image from an aerial mesh.

Piemonte teaches selecting, using sensor data, a target that comprises a physical element in proximity to the vehicle as the vehicle is travelling based on a rendering of an image from an aerial mesh (Fig. 4 shows the visualization; “[0012] The pre-generated 3D mesh map data may be available for the entire real environment, 360° around the vehicle, behind occlusions, and beyond the horizon. Thus, in some embodiments, the 3D mesh map data may be leveraged to provide information about the environment, including objects that are not visible, to the sides and behind the vehicle. [0038]… Elements 1010 and 1020 may, for example, be performed by a network-based 3D data system including one or more computing systems that collect images (e.g., aerial and street photography from previous collections, images captured by vehicles equipped with instances of the AR system and/or video or still cameras, images captured by personal devices such as smartphones or tablets, etc.), stereographically reconstruct and otherwise process the images to generate data including 3D mesh maps of surfaces. [0041] As indicated at 1040, the AR system may query sensor data according to the fetched, pre-generated 3D data to obtain 3D information for the local environment within the range of the sensors”).

Piemonte and Sibley are analogous art, as both are related to augmentation of objects. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley by selecting, using sensor data, a target that comprises a physical element in proximity to the vehicle as the vehicle is travelling based on a rendering of an image from an aerial mesh, as taught by Piemonte. The motivation for the above is to use a 3D mesh for precise selection of objects.
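The Sibley passages above describe picking an object of interest from the passenger's head pose plus mapped object positions. A minimal sketch of that idea (all names, thresholds, and coordinates below are invented for illustration, not taken from the references) is to score each mapped object by its angular proximity to the gaze ray:

```python
import math

# Illustrative sketch: choose the mapped object a driver is most likely
# looking at, given a head pose (position + forward direction) and object
# positions from map/mesh data. Thresholds are hypothetical.

def select_target(head_pos, head_dir, targets, max_range=250.0, min_cos=0.95):
    """Return the target name best aligned with the gaze ray, or None."""
    # Normalize the gaze direction.
    norm = math.sqrt(sum(c * c for c in head_dir))
    d = tuple(c / norm for c in head_dir)
    best, best_cos = None, min_cos
    for name, pos in targets:
        v = tuple(p - h for p, h in zip(pos, head_pos))
        dist = math.sqrt(sum(c * c for c in v))
        if dist == 0 or dist > max_range:
            continue  # degenerate, or too far to be the object of interest
        cos_angle = sum(a * b for a, b in zip(d, v)) / dist
        if cos_angle > best_cos:  # smaller angle to the gaze ray wins
            best, best_cos = name, cos_angle
    return best

targets = [("billboard", (100.0, 5.0, 0.0)), ("sign", (20.0, 40.0, 0.0))]
print(select_target((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), targets))  # billboard
```

A production system would combine this with the mesh data described by Piemonte (e.g., ray-casting against the 3D mesh) rather than matching against a flat list of points.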
Sibley modified by Piemonte teaches generating virtual content based on the sensor data, based on the image from the aerial mesh in addition to the head pose of the driver (Sibley Col 14 line 65 - Col 15 line 4: “The system 300 can then use the sensor data to determine a motion of the vehicle 304, such as a measured acceleration, speed, orientation (e.g., direction of travel, roll, pitch, yaw, etc.), and other characteristics of the vehicle 304. Using the motion, the system 300 can procedurally generate and provide the content 402 to the first passenger 306(1)”); and delivering the virtual content for one or more users of the vehicle as an overlay over, and in an augmented manner with respect to, the physical target (Sibley Col 7 lines 42-46: “The system can then use that determination to augment the object for the passenger using the content. For instance, the system can cause the translucent display to display the content in such a way that the content looks superimposed on the object for the passenger through the translucent display”).

Claim 1 is directed to a method whose steps are similar in scope to the functions performed by the elements of system claim 12, and therefore claim 1 is also rejected with the same rationale as specified in the rejection of claim 12.

Regarding claim 20, Sibley teaches a vehicle comprising: a plurality of sensors configured to obtain sensor data for a vehicle, the sensor data comprising image sensor data, location sensor data, and inertial measurement unit (IMU) sensor data obtained from the vehicle (Col 10 lines 40-60: “The vehicle 110 can be equipped with a number and wide variety of sensors 108. These sensors 108 may enable various systems on the vehicle 110 to detect, for example, acceleration, deceleration, wheel slip, and turning of the vehicle 110. These systems can include, for example, antilock braking systems (ABS), traction control systems (TCS), and autonomous driving systems, among other things. The vehicle 110 can be equipped with cameras (video cameras, still camera, depth cameras, RGB-D cameras, RGB cameras, intensity cameras, time-of-flight cameras, thermographic cameras, or any other type of camera), radar, LiDAR, sonar (or other ultrasonic transducers), and/or other object detection or proximity sensors. The vehicle 110 can also be equipped with accelerometers, magnetometers, Inertial Measurement Units (IMUs), speed sensors, encoders, gyroscopes, and other equipment that report to a central processing unit (CPU) for the vehicle to measure acceleration, speed, orientation (e.g., direction of travel, roll, pitch, yaw, etc.), and other characteristics”); a processor that is coupled to the plurality of sensors (Fig. 8, elements 820 and 832) and that is configured to at least facilitate: selecting, using the sensor data, a physical target that comprises a physical element in proximity to the vehicle as the vehicle is travelling (Col 6 lines 20-38: “In some examples, when identifying objects that are proximate to the vehicle, the system can utilize the pose estimation and/or the gaze detection to identify an object of interest for the passenger… For another instance, the passenger may be looking through the translucent display at a specific object that is located in the environment outside of the vehicle”) based on a rendering of an image from aerial map data in addition to a head pose of a driver of the vehicle, the head pose including a position and orientation of a head of the driver (Col 3 lines 23-25: “determine a pose of a head of the passenger, where the pose can include the three-dimensional location (e.g., (X,Y,Z) coordinates) and/or orientation of the head of the passenger within the vehicle”; Col 6 lines 38-42: “To identify the specific object, the system can utilize the map data, information about the object (e.g. its size, position relative to the vehicle, classification, etc.), the pose of the passenger relative to the vehicle, and the gaze detection to determine that the passenger is focusing on the specific object”; Col 9 lines 57-60: “In some instances, map data may be associated with topologic maps, metric maps, semantic maps, street maps, rail maps, weather maps, GPS data, satellite images, geographic information, street-view images, coordinates, points of interest”), but is silent regarding selecting based on a rendering of an image from an aerial mesh.

Piemonte teaches selecting, using sensor data, a target that comprises a physical element in proximity to the vehicle as the vehicle is travelling based on a rendering of an image from an aerial mesh (Fig. 4 shows the visualization; “[0012] The pre-generated 3D mesh map data may be available for the entire real environment, 360° around the vehicle, behind occlusions, and beyond the horizon. Thus, in some embodiments, the 3D mesh map data may be leveraged to provide information about the environment, including objects that are not visible, to the sides and behind the vehicle. [0038]… Elements 1010 and 1020 may, for example, be performed by a network-based 3D data system including one or more computing systems that collect images (e.g., aerial and street photography from previous collections, images captured by vehicles equipped with instances of the AR system and/or video or still cameras, images captured by personal devices such as smartphones or tablets, etc.), stereographically reconstruct and otherwise process the images to generate data including 3D mesh maps of surfaces. [0041] As indicated at 1040, the AR system may query sensor data according to the fetched, pre-generated 3D data to obtain 3D information for the local environment within the range of the sensors”). Piemonte and Sibley are analogous art, as both are related to augmentation of objects.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley by selecting, using sensor data, a target that comprises a physical element in proximity to the vehicle as the vehicle is travelling based on a rendering of an image from an aerial mesh, as taught by Piemonte. The motivation for the above is to use a 3D mesh for precise selection of objects.

Sibley modified by Piemonte teaches generating a head pose and a vehicle pose, based on the image sensor data, the location sensor data, and the IMU sensor data obtained from the vehicle (Sibley Col 2 lines 57-66: “Such a system may rely on both information about a vehicle in an environment, as well as information about passengers viewing such content in the vehicle. Information about the vehicle may include pose information (i.e. a full position and orientation relative to some local or global coordinate system), as well as object information (e.g. object detections, classifications, and locations). One such method for obtaining vehicle pose is through the use of simultaneous localization and mapping (SLAM). SLAM may incorporate various sensor modalities (e.g. cameras, lidar, radar, etc.)”; Col 3 lines 18-25: “The system can then analyze the sensor data using pose estimation to determine a pose of the passenger within the vehicle. For instance, the system can analyze the sensor data using one or more computer-vision algorithms associated with pose estimation to determine a pose of a head of the passenger, where the pose can include the three-dimensional location (e.g., (X,Y,Z) coordinates) and/or orientation of the head of the passenger within the vehicle”).

Sibley modified by Piemonte teaches generating virtual content using the head pose and the vehicle pose, based on the image from the aerial mesh in addition to the head pose of the driver (Sibley Col 2 lines 19-29: “The present disclosure is related to systems and methods for providing content to at least one passenger (e.g., a driver, a passive passenger, etc.) within a passenger compartment of a moving vehicle. Such systems and methods may be based on a pose of an autonomous vehicle in an environment, location of objects displaced about such a vehicle, and pose estimation and/or gaze detection (e.g., eye tracking) of a passenger in the moving vehicle. Such content may be presented in a way so as to create a unique experience for any passengers therein”); and providing instructions for delivering the virtual content for one or more users of the vehicle as an overlay over, and in an augmented manner with respect to, the physical target (Sibley Col 2 lines 28-31: “For example, based on information about vehicle pose, environment information, passenger information, and the like, content may be delivered in such a way as to create augmented reality experiences”); and a display device that is configured to display the virtual content overlayed on or presented in the augmented manner with respect to the physical target inside the vehicle for the one or more users in accordance with the instructions provided by the processor (Col 7 lines 42-46: “The system can then use that determination to augment the object for the passenger using the content. For instance, the system can cause the translucent display to display the content in such a way that the content looks superimposed on the object for the passenger through the translucent display”).

Regarding claims 2 and 13, Sibley modified by Piemonte teaches wherein: the sensor data comprises image sensor data, location sensor data, and inertial measurement unit (IMU) sensor data obtained from the vehicle (Sibley Col 10 lines 40-60, quoted above with respect to claim 20); the method further includes generating, via the processor, a head pose and a vehicle pose, based on the image sensor data, the location sensor data, and the IMU sensor data obtained from the vehicle (Sibley Col 2 lines 57-66 and Col 3 lines 18-25, quoted above); and the generating of the virtual content is performed using the head pose and the vehicle pose (Sibley Col 2 lines 19-29, quoted above).

Regarding claims 4 and 15, Sibley modified by Piemonte teaches wherein the virtual content is displayed for the one or more users of the vehicle on one or more display screens inside the vehicle (Sibley Col 2 lines 39-46: “In some examples, the system can provide content to the passenger within the passenger compartment using one or more surfaces within the vehicle.
For instance, one or more of the windows of the vehicle may include a translucent display that is capable of allowing the passenger to view the environment outside of the vehicle, while at the same time, displaying content to the passenger such that the content looks superimposed with the environment”).

Regarding claims 5 and 16, Sibley modified by Piemonte teaches wherein: the physical target comprises a billboard along a roadway in which the vehicle is travelling (Sibley Col 8 lines 57-60: “For instance, the system can determine whether the object of interest includes a billboard, sign, and/or other type of surface for providing content to passengers. The system can then cause the at least one projector to project the content on the surface”); and the virtual content is overlayed over, and presented in an augmented manner with respect to, the billboard as displayed inside the vehicle for the one or more users (Sibley Col 7 lines 42-46, quoted above).

Regarding claim 7, Sibley modified by Piemonte teaches wherein: the physical target comprises a point of interest along a roadway in which the vehicle is travelling (Sibley Col 8 lines 46-51: “In some examples, the system can augment the outside environment and/or one or more objects located in the environment by using one or more projectors to project content outside of the vehicle for the passenger. For instance, the system can identify an object located outside of the vehicle, such as an object of interest for the passenger”); and the virtual content is overlayed in an augmented manner with respect to the point of interest (Sibley Col 7 lines 42-46, quoted above).

Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sibley modified by Piemonte, and further in view of Fueki et al. (US Pat. Pub. No. 20200234497, “Fueki”).

Regarding claims 3 and 14, Sibley modified by Piemonte teaches wherein the generating of the virtual content is performed further using an image database that is stored in a computer storage, in combination with the head pose and the vehicle pose (Sibley Col 24 lines 36-41: “In various examples, the memory 802 can be volatile (such as random access memory, or RAM), non-volatile (such as read only memory, or ROM, flash memory, etc.), or some combination of the two. The memory 802 can also comprise the application 804. As discussed herein, the application 804 receives sensor data, from either internal 832 or external 108 sensors, and can provide the passenger with content 816”; Col 2 lines 19-29: “The present disclosure is related to systems and methods for providing content to at least one passenger (e.g., a driver, a passive passenger, etc.) within a passenger compartment of a moving vehicle. Such systems and methods may be based on a pose of an autonomous vehicle in an environment, location of objects displaced about such a vehicle, and pose estimation and/or gaze detection (e.g., eye tracking) of a passenger in the moving vehicle.
Such content may be presented in a way so as to create a unique experience for any passengers therein”), but is silent about the generating of the virtual content being performed further using a three-dimensional image database.

Fueki teaches generating virtual content further using a three-dimensional image database (“[0039]… The three-dimensional shape of the virtual moving body 100 may be stored in the virtual space construction part 250 or obtained from a 3D model database 240”). Fueki and Sibley modified by Piemonte are analogous art, as both are related to augmentation of objects. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley modified by Piemonte by generating the virtual content further using a three-dimensional image database, as taught by Fueki. The motivation for the above is to enhance the applicability of Sibley by supporting higher dimensional content for better visibility.

Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Sibley modified by Piemonte, and further in view of Kim et al. (US Patent No. 11709069, “Kim”).

Regarding claim 6, Sibley modified by Piemonte is silent about wherein: the physical target comprises a traffic control device along a roadway in which the vehicle is travelling. Kim teaches a physical target that comprises a traffic control device along a roadway in which the vehicle is travelling (Col 14 lines 52-60: “The three-dimensional navigation information may be information in which virtual three-dimensional graphic information for driving guidance is spatially matched to and displayed on an actual object of the real world by using the HD map information. The graphic information may be formed in the form of at least one of a text, an image (e.g., an icon), or a video. The actual object of the real world may be various terrains and topographical features, such as roads, traffic lights and signs on roads, and rivers”). Kim and Sibley modified by Piemonte are analogous art, as both are related to object augmentation. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley by having a physical target that comprises a traffic control device along a roadway in which the vehicle is travelling, as taught by Kim. The motivation for the above is to enhance the applicability of Sibley by supporting different types of physical items. Sibley modified by Piemonte and Kim teaches the virtual content is overlayed over, or presented in an augmented manner with respect to, the traffic control device as displayed inside the vehicle for the one or more users (Sibley Col 7 lines 42-46, quoted above).

Regarding claim 8, Sibley modified by Piemonte is silent about wherein: the physical target comprises a road condition object along a roadway in which the vehicle is travelling. Kim teaches a physical target that comprises a road condition object along a roadway in which the vehicle is travelling (Col 14 lines 52-60, quoted above). Kim and Sibley modified by Piemonte are analogous art, as both are related to object augmentation. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley modified by Piemonte by having a physical target that comprises a road condition object along a roadway in which the vehicle is travelling, as taught by Kim. The motivation for the above is to enhance the applicability of Sibley by supporting different types of physical items. Sibley modified by Piemonte and Kim teaches the virtual content is overlayed over, or presented in an augmented manner with respect to, the road condition object, as displayed inside the vehicle for the one or more users (Sibley Col 7 lines 42-46, quoted above).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Sibley modified by Piemonte, and further in view of Rondao Alface et al. (US Pat. Pub. No. 20230298217, “Rondao”).

Regarding claim 9, Sibley modified by Piemonte teaches that the virtual content is delivered from a remote system to the vehicle (Sibley Col 29 lines 29-34: “The remote system 828 can then send the controller 102 data indicating the surface for providing the content 816, along with the content 816 that the vehicle is to provide.
Based on receiving the data, the controller 102 can cause the content 816 to be provided at the surface.”), but does not expressly teach the remote system utilizing a level of detail of a three-dimensional (3D) mesh, where the level of detail is selected based on a distance between the physical target and the vehicle.

However, Rondao teaches the level of detail is selected based on a distance between a physical target and a viewing device (“[0005] In accordance with an aspect, an apparatus includes: at least one processor; … where the level of detail increases as fewer occupied positions are subsampled, and where the level of detail is chosen depending on an operating parameter of a rendering device or viewer distance; wherein the scalability information is configured to be used with the decoder to reconstruct a mesh at different operating points to approximate a shape of the three-dimensional object.”). Sibley modified by Piemonte and Rondao are analogous art, as they are from the field of generating virtual images. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley modified by Piemonte to have included the remote system utilizing a level of detail of a three-dimensional (3D) mesh, with the level of detail selected based on a distance between the physical target and the vehicle, as taught by Rondao. The motivation of the modification is to control the resolution of the virtual content.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Sibley as modified by Piemonte and Rondao, and further in view of Shkurko (US Pat. Pub. No. 20230206544, “Shkurko”).
Regarding claims 11, Sibley as modified by Piemonte and Rondao fails to expressly teach, performing, via the processor, frustum and back-face culling of the 3D meshes, including by including by converting from a Cartesian coordinate system to a spherical coordinate system having a three dimensional space in which the origin is the user's position However, Shkurko teaches performing, via a processor, frustum and back-face culling of a 3D meshes, including by including by converting from a Cartesian coordinate system to a spherical coordinate system having a three dimensional space in which the origin is user's position (ABSTRACT “The frustum is transformed from a Cartesian coordinate space to a spherical coordinate space using a transform matrix that places a central ray of the frustum as the Z-axis. [0014]…… Depending on whether an intersection at a node is detected, the BVH traversal can continue, where for each leaf node representing a bounding volume for a geometric object (e.g., a triangle, a rectangle, a mesh, etc.) for which an intersection with the frustum is detected. [0018]….. Either approach may utilize a ray-object intersection test for various purposes. For example, in a ray tracing process, the path of a light “ray” is traced from a viewpoint (the “camera”) through a corresponding pixel of a two-dimensional (2D) plane (the image plane) into the three-dimensional (3D) virtual scene …….In a rasterization-based rendering process, a ray-object intersection test can be employed in, for example, various culling operations, such as view frustum culling, occlusion culling, backface culling, mesh culling, and the like”); Sibley as modified by Piemonte and Rondao and Shkurko are analogous as they are from the field of 3D image generation. 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley as modified by Piemonte and Rondao to have included performing, via a processor, frustum and back-face culling of 3D meshes, including by converting from a Cartesian coordinate system to a spherical coordinate system having a three dimensional space in which the origin is the user's position, as taught by Shkurko. The motivation for the modification is to use a standard method of culling 3D mesh data for proper rendering. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Sibley as modified by Piemonte, and further in view of Cundall et al. (US Pat. Pub. No. 20220392135, "Cundall"). Regarding claim 21, Sibley as modified by Piemonte fails to expressly teach wherein the virtual content is displayed for the one or more users of the vehicle on one or more drop down display screens inside the vehicle. Cundall teaches content displayed for one or more users of a vehicle on one or more drop down display screens inside the vehicle ("[102]…… visually presents search responses on display screens in the car, e.g., situated in the car's dashboard, within headrests, on a drop-down screen, or the like"). Sibley as modified by Piemonte and Cundall are analogous art, as they are from the field of image display. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley as modified by Piemonte to have included displaying the virtual content for the one or more users of the vehicle on one or more drop down display screens inside the vehicle, similar to the display of content for one or more users of a vehicle on one or more drop down display screens inside the vehicle as taught by Cundall.
The motivation for the modification is to enhance the applicability of Sibley by having different types of displays. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Sibley as modified by Piemonte, and further in view of Kameyama (US Pat. Pub. No. 20100023234, "Kameyama"). Regarding claim 22, Sibley as modified by Piemonte is silent as to wherein the virtual content is displayed for the one or more users of the vehicle on one or more in-mirror display screens inside the vehicle. Kameyama teaches content displayed for one or more users of a vehicle on one or more in-mirror display screens inside the vehicle ("[0077]….. the car navigation apparatus 81d, the in-mirror display section 82, the HUD 83, the speaker 91, and the smell generator 93, as already explained contents"). Sibley as modified by Piemonte and Kameyama are analogous art, as they are from the field of image display. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley as modified by Piemonte to have included displaying the virtual content for the one or more users of the vehicle on one or more in-mirror display screens inside the vehicle, similar to the display of content for one or more users of a vehicle on one or more in-mirror display screens inside the vehicle as taught by Kameyama. The motivation for the modification is to enhance the applicability of Sibley by having different types of displays. Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Sibley as modified by Piemonte and Kim, and further in view of Ng-Thow-Hing et al. (US Pat. Pub. No. 20140362195, "Ng-Thow-Hing").
Regarding claim 23, Sibley as modified by Piemonte and Kim is silent as to the physical target and road condition object comprising a pothole along the roadway in which the vehicle is travelling. Ng-Thow-Hing teaches a physical target and road condition object comprising a pothole along the roadway in which the vehicle is travelling ("[0084] Information related to the obstacle detection and warning function may be presented to the driver as a contact-analog augmented reality graphic element projected by the first projector 118 of the HUD device 102. In this regard, the vehicle control system 180 may detect various obstacles in the roadway on which the vehicle 106 is travelling. For example, obstacles may include pedestrians crossing the roadway, other vehicles, animals, debris in the roadway, potholes, etc"). Sibley as modified by Piemonte and Kim and Ng-Thow-Hing are analogous art, as they are from the field of image display. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sibley as modified by Piemonte and Kim to have included a physical target and road condition object that comprises a pothole along the roadway in which the vehicle is travelling, as taught by Ng-Thow-Hing. The motivation for the above is to provide notification of a bad road condition to the user. Sibley as modified by Piemonte and Kim and Ng-Thow-Hing teaches that the virtual content is overlaid over, or presented in an augmented manner with respect to, the pothole, as displayed inside the vehicle for the one or more users (Sibley, Col. 7, lines 42-46: "The system can then use that determination to augment the object for the passenger using the content. For instance, the system can cause the translucent display to display the content in such a way that the content looks superimposed on the object for the passenger through the translucent display").
Allowable Subject Matter Claim 24 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as the prior art of record does not teach the claim limitations as a whole. Response to Arguments Applicant's arguments, see remarks filed 12/05/2025, with respect to the rejection of claims 1, 12 and 20 have been fully considered and are persuasive. The rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 U.S.C. 103 as being unpatentable over Sibley (US Patent No. 10242457, "Sibley") in view of Piemonte et al. (US Pat. Pub. No. 20210166490, "Piemonte"). Applicant's arguments, see remarks filed 12/05/2025, pages 10-11, with respect to the rejection of claim 11 have been fully considered and are persuasive. The rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 U.S.C. 103 as being unpatentable over Sibley as modified by Piemonte and Rondao, and further in view of Shkurko (US Patent Publication No. 20230206544, "Shkurko"). Applicant argues, see remarks pages 11-12: "By way of example……. wherein the virtual content is displayed for the one or more users of the vehicle on one or more display screens inside the vehicle. For example, as noted above, Applicant respectfully notes that in the Sibley reference cited in the Office Action (at p. 9) with respect to Applicant's original Claim 15, the display is provided via a window (Sibley, at Col. 2, lines 39-46) rather than via a display screen as recited in Applicant's amended Claim 15". The Examiner notes that when content is displayed on a window, the window becomes a display screen. See Sibley, Col. 11, lines 46-52: "For instance, the one or more display surfaces 214 can display content to the passengers within the vehicle 206.
In some examples, displaying the content can include augmenting the environment outside of the vehicle 206 with content. For instance, the one or more display surfaces 214 can augment one or more objects located in the environment outside of the vehicle 206 with the content". Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAPTARSHI MAZUMDER whose telephone number is (571)270-3454. The examiner can normally be reached 8 am-4 pm PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at (571)272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SAPTARSHI MAZUMDER/Primary Examiner, Art Unit 2612

Prosecution Timeline

Mar 07, 2024: Application Filed
Sep 02, 2025: Non-Final Rejection (§103, §112)
Nov 26, 2025: Examiner Interview Summary
Nov 26, 2025: Applicant Interview (Telephonic)
Dec 05, 2025: Response Filed
Mar 02, 2026: Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597211: GENERATING VARIANTS OF VIRTUAL OBJECTS BASED ON ADJUSTABLE EXTERNAL FACTORS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586316: METHOD FOR MIRRORING 3D OBJECTS TO LIGHT FIELD DISPLAYS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12582488: USER INTERFACE FOR CONNECTING MODEL STRUCTURES AND ASSOCIATED SYSTEMS AND METHODS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579745: Curvature-Guided Inter-Patch 3D Inpainting for Dynamic Mesh Coding (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567210: Multipath Artifact Avoidance in Mobile Dimensioning (granted Mar 03, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 76% (+11.8%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
