Prosecution Insights
Last updated: April 19, 2026
Application No. 18/549,556

BILLBOARD SIMULATION AND ASSESSMENT SYSTEM

Status: Final Rejection (§102)
Filed: Sep 07, 2023
Examiner: NGUYEN, PHONG X
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Drive Your Art LLC
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 75% of resolved cases, above average for the Tech Center. Strong +25% interview lift.

Career Allow Rate: 75% (297 granted / 397 resolved; +12.8% vs TC avg)
Interview Lift: +25.3% (resolved cases with interview vs without)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 409 across all art units (12 currently pending)

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 397 resolved cases.
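The headline figures above can be reproduced from the raw counts the page reports. A minimal sketch follows; the vendor's actual aggregation formula is not published, so treating the "+12.8% vs TC avg" delta as a simple subtraction from the allow rate is an assumption:

```python
# Reproducing the dashboard's headline examiner statistics from the raw
# counts shown on the page. The counts (297 granted, 397 resolved) come
# from the page itself; the arithmetic below is an illustrative assumption,
# not the analytics vendor's documented method.

granted = 297
resolved = 397

allow_rate = granted / resolved  # 0.7481..., displayed rounded as 75%
print(f"Career allow rate: {allow_rate:.1%}")

# The page reports "+12.8% vs TC avg" for this rate, which would imply a
# Tech Center average allow rate of roughly 62% (inferred, not stated).
implied_tc_avg = allow_rate - 0.128
print(f"Implied TC average allow rate: {implied_tc_avg:.1%}")
```

Note that 297/397 rounds to 74.8%, so the 75% tile is consistent with the underlying counts.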

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant's amendment filed 11/25/2025 has been entered. Claims 1-20 remain pending in the present application.

Response to Arguments

Applicant's arguments with respect to claims 1, 5, 7, 11, 15, and 17 have been considered but are moot in view of the new ground of rejection, necessitated by the present amendment.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Varshney et al. (Pub. No. 2020/0219323).

Regarding claim 1, Varshney discloses a system comprising:

a memory and a display (the user/client device 242, 244, or 246 in Fig. 2 inherently has a memory and a display; see also the abstract);

at least one database (database(s) 235 in Fig. 2);

at least one processor communicatively coupled to the memory, the display, and the at least one database, the processor configured to execute computer readable instructions stored in the memory to perform operations (Abstract: "An end-user system in accordance with the present disclosure includes a communication device configured to communicate with a server, a display screen, one or more processors, and at least one memory storing instructions which, when executed by the processor(s), cause the end-user system to access a physical world geographical location from a user, access two-dimensional physical world map data for a region surrounding the physical world geographical location, render for display on the display screen a three-dimensional mirrored world portion based on the two-dimensional physical world map data") comprising:

accessing graphical data and at least one advertisement from the at least one database (par. 53: "The main operation of the Geollery servers 230 is to process information from the map data sources 210 and the social media sources 220 into a form that is usable by the client devices 242-246 to generate a mirrored world with social media components in real time", and par. 51: "Social media posts can include, without limitation, images, text, video, audio, animations, and/or static or dynamic 3D meshes, among other things. In various embodiments, the geotagged social media can be presented in various forms, such as balloons, billboards, framed photos, and/or virtual gifts, among others". Note that a social media post could comprise a billboard displaying an advertisement, as illustrated in Fig. 13 and described in par. 90);

setting a perceived traveling speed based on a user's vantage point (par. 88: "After selecting an avatar, users can use a keyboard or a panning gesture on a mobile device to virtually walk in the mirrored world". Varshney further discloses that the mirrored world can be rendered in the first-person perspective; see par. 51: "offer a third-person or first-person walking experience in immersive virtual environments 160 as shown in FIG. 1". In the first-person view, the "vantage point" is the perspective of the avatar's eyes. Furthermore, a keystroke (e.g., pressing one of the arrow keys on a keyboard) or a panning gesture (e.g., a left or right swipe on a touchscreen) requires a determined, or default, speed of movement to change the scene from a first location to a second location. When the user uses a keystroke or a panning gesture in the first-person view to navigate a virtual environment, the speed at which new material appears on the screen (the "perceived traveling speed") is inherently linked to the speed associated with the keystroke or the panning motion (the user's controlled vantage point). Therefore, it is a necessary, inherent result that the "perceived traveling speed" is determined by the keystroke or panning gesture defining the "vantage point");

generating on the display a three-dimensional (3D) moving representation of an environment from a moving perspective of a user (par. 62: "As the user virtually walks on the street in the mirrored world, the Geollery server(s) stream additional data to the client device for rendering further portions of the mirrored world"), the 3D moving representation of the environment comprising a billboard at a specific location displaying the at least one advertisement such that visualization of the billboard corresponds to the user's vantage point (Fig. 13 illustrates a billboard affixed to a façade of a building, the billboard promoting new arrivals at a retail store. Since Varshney discloses that the mirrored world can be rendered in the first-person perspective, visualization of objects in the mirrored world corresponds to the user's vantage point (the eyes of the user's avatar)); and

updating the 3D moving representation of the environment and the billboard in response to changes in the user's vantage point in real time (par. 62: "As the user virtually walks on the street in the mirrored world, the Geollery server(s) stream additional data to the client device for rendering further portions of the mirrored world", par. 85: "the disclosed technology achieves six degrees of freedom in movement. To achieve such movement, the disclosed technology progressively streams data from a map data source, such as OpenStreetMap, to build 3D meshes in real time", and par. 90: "Billboards can be used to show thumbnails of geotagged images or text. In various embodiments, different levels of detail for thumbnails can be available, such as 64², 128², 256², and 512² pixels, and progressively load higher resolution thumbnails as users approach different billboards").

Regarding claim 2, Varshney discloses the system of claim 1, wherein the at least one advertisement includes at least one of alphabets, numbers, words, symbols, images, graphics and videos (as shown in Fig. 13 of Varshney, the advertisement includes an image).

Regarding claim 3, Varshney discloses the system of claim 1, wherein the environment comprises at least one of virtual image mode and actual image mode (in Varshney, the mirrored world is in a virtual image mode).

Regarding claim 4, Varshney discloses the system of claim 1, wherein the environment includes elements stored in or uploaded to the at least one database (par. 53: "Data received from the map data sources 210 and the social media sources 220 are stored in one or more databases 235", and par. 89: "the Geollery servers allow users to create billboards, balloons, and/or gift boxes at their avatar's location by uploading photos or text messages").

Regarding claim 5, Varshney discloses the system of claim 1, wherein the processor is further configured to perform operations comprising creating at least one of dynamic video format and static image format of the 3D moving representation of the environment and the billboard (par. 70: "Regarding the vertex shading block 620, to achieve interactive frame rates, the disclosed technology can use a custom vertex shader running on the GPU to determine the exact positions of the sphere vertices to create a convincing geometry". Since motion of the 3D mirrored world in Varshney is perceived by rendering a sequence of image frames at interactive frame rates, it can be viewed as a dynamic video format. Par. 51 further discloses that a billboard can display content in video format: "Social media posts can include, without limitation, images, text, video, audio, animations, and/or static or dynamic 3D meshes, among other things. In various embodiments, the geotagged social media can be presented in various forms, such as balloons, billboards").

Regarding claim 6, Varshney discloses the system of claim 1, wherein the generating step further comprises modifying parameters of the billboard or modifying the at least one advertisement (par. 90: "Billboards can be used to show thumbnails of geotagged images or text. In various embodiments, different levels of detail for thumbnails can be available, such as 64², 128², 256², and 512² pixels, and progressively load higher resolution thumbnails as users approach different billboards. When users hover over a billboard, the billboard can reveal associated text captions. In various embodiments, the text captions can be truncated to a particular number of lines, such as four lines. When users click on a billboard, a window can appear with detail including the complete text caption, the number of likes, and/or any user comments. An example of a billboard is shown in FIG. 13").

Regarding claim 7, Varshney discloses the system of claim 1, wherein the generating step further comprises altering sensory stimuli of the environment (par. 50: "3D geometries are extruded in the mirrored world from the 2D polygons in the ground plane, and then shaded with the appropriate lighting and shadows to form buildings in the mirrored world 120. Then, the process renders a mirrored world within a frontal or a 360-degree field of view in real time and adds virtual avatars, clouds, trees, and/or day/night effects, among other things", and par. 86: "real-world phenomena such as day and night transitions and changing seasons can be implemented to make the mirrored worlds more realistic. Persons skilled in the art will recognize the ways to implement day/night transitions which adjust the lighting and sky based on the local time of the physical world geographical location corresponding to the avatar's mirrored world position").

Regarding claim 8, Varshney discloses the system of claim 7, wherein the updating step includes updating the 3D moving representation of the environment in response to the altering sensory stimuli of the environment (par. 86: "real-world phenomena such as day and night transitions and changing seasons can be implemented to make the mirrored worlds more realistic").

Regarding claim 9, Varshney discloses the system of claim 8, wherein the processor is further configured to perform operations comprising creating at least one of dynamic video format and static image format of the 3D moving representation of the environment and the billboard (as explained in the rejection of claim 5 above, motion of the 3D mirrored world in Varshney is perceived by rendering a sequence of image frames at interactive frame rates, which can be viewed as a dynamic video format).

Regarding claim 10, Varshney discloses the system of claim 9, wherein the at least one of dynamic video format and static image format can be displayed on portable electronic devices including AR/VR headsets (par. 53: "A user device can be a workstation 242, such as a laptop or desktop computer, a mobile device 244, such as a tablet or smartphone, and/or a headset device 246", and par. 97: "the Geollery servers were hosted on Amazon Web Services and the test was run on mobile phones, workstations, and head-mounted displays").

Claims 11-20 recite similar limitations as respective claims 1-10, but are directed to a corresponding method. Since Varshney also discloses such a method (see par. 16), these claims are rejected under the same rationales set forth in the rejections of their respective claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHONG X NGUYEN, whose telephone number is (571) 270-1591. The examiner can normally be reached Mon-Fri, 8am-5pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHONG X NGUYEN/
Primary Patent Examiner, Art Unit 2617
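The level-of-detail scheme the examiner quotes from Varshney par. 90 (progressively loading 64², 128², 256², then 512² pixel thumbnails as the user approaches a billboard) can be sketched as a simple distance-banded lookup. The distance cutoffs below are hypothetical, chosen only for illustration; Varshney does not specify them.

```python
# A minimal sketch of the distance-based level-of-detail (LOD) scheme
# described in Varshney par. 90: higher-resolution billboard thumbnails
# (64^2, 128^2, 256^2, 512^2 pixels) load as the user's avatar approaches.
# The distance cutoffs are hypothetical; the reference does not give them.

LOD_SIZES = (64, 128, 256, 512)      # thumbnail edge lengths, in pixels
LOD_CUTOFFS = (200.0, 100.0, 50.0)   # hypothetical far-to-near cutoffs, meters

def thumbnail_size(distance_m: float) -> int:
    """Return the lowest-resolution thumbnail adequate at this viewer distance."""
    for size, cutoff in zip(LOD_SIZES, LOD_CUTOFFS):
        if distance_m >= cutoff:
            return size
    return LOD_SIZES[-1]             # closest band: full 512^2 thumbnail

# As the avatar walks toward the billboard, resolution steps up:
for d in (250.0, 150.0, 75.0, 10.0):
    s = thumbnail_size(d)
    print(f"{d:6.1f} m -> {s}x{s} thumbnail")
```

This mirrors the inherency reasoning in the rejection: the thumbnail shown is a pure function of the avatar's position, i.e., of the user's vantage point.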

Prosecution Timeline

Sep 07, 2023: Application Filed
Sep 02, 2025: Non-Final Rejection (§102)
Nov 12, 2025: Examiner Interview Summary
Nov 12, 2025: Applicant Interview (Telephonic)
Nov 25, 2025: Response Filed
Jan 30, 2026: Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585691: PERSONALIZED PRESENTATION CONTENT CONSUMPTION IN A VIRTUAL REALITY (VR) ENVIRONMENT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579730: Ray-Box Intersection Circuitry (granted Mar 17, 2026; 2y 5m to grant)
Patent 12569299: METHOD FOR GENERATING SURGICAL SIMULATION INFORMATION AND PROGRAM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12573136: SCENE TRACKS FOR REPRESENTING MEDIA ASSETS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12560998: DISPLAY DIMMING CONTROL APPARATUS, DISPLAY DIMMING CONTROL METHOD, RECORDING MEDIUM, AND DISPLAY DIMMING SYSTEM (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 99% (+25.3%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
