Prosecution Insights
Last updated: April 19, 2026
Application No. 18/596,838

SYSTEMS AND METHODS FOR 3D SCENE AUGMENTATION AND RECONSTRUCTION

Non-Final OA (§102, §103)
Filed: Mar 06, 2024
Examiner: BROUGHTON, KATHLEEN M
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Resonai Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 92%

Examiner Intelligence

Grants 83% — above average

Career Allow Rate: 83% (219 granted / 263 resolved; +21.3% vs TC avg)
Interview Lift: +8.3% (moderate lift; resolved cases with interview vs without)
Avg Prosecution: 2y 7m (typical timeline)
Currently Pending: 34
Total Applications: 297 (career history, across all art units)

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 263 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

A Preliminary Amendment was filed 03/06/2024 to amend the specification and cancel claims 21-140. Originally filed claims 1-20 are pending and examined below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on March is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8-10, and 12-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Katz et al. (US 2018/0082123).
Regarding Claim 1, Katz et al. teach a computer-implemented visual input reconstruction system for enabling selective insertion of content into preexisting media content frames (computing system 30 that implements method 1100 for an exchange to dynamically control dynamic virtual signage at a physical location, such as a sporting venue; Fig 3, 8, 11 and ¶ [0037], [0102], [0126]-[0127]), the system comprising: at least one processor (processing unit 306; Fig 3 and ¶ [0038]) configured to: access a memory storing a plurality of object image identifiers associated with a plurality of objects (the processor 306 can access memory 314, including access to an image/video data store 322 and classifier data store 330 to identify image objects; Fig 3 and ¶ [0040]); transmit, to one or more client devices, at least one object image identifier of the plurality of object image identifiers (the network interface 308 allows transmission 342 over the network 336 to a client computing system 303, 340, including sponsored object data (Fig 4 and ¶ [0043]-[0044]); Fig 3, 11 and ¶ [0041], [0126]-[0129], [0147]); receive, from the one or more client devices, one or more bids associated with the at least one object image identifier (the client computing system 303 may transmit a bid for advertisement space; Fig 3, 11 and ¶ [0060]-[0062], [0130], [0147]); determine a winning bid from among the received one or more bids, the winning bid being associated with a winning client device from among the one or more client devices (the exchange may automatically select winning bidders for sponsorship data if the given rules are satisfied from among the computing sponsor devices; Fig 11 and ¶ [0148]-[0149]); receive winner image data from the winning client device (the given signage is presented at the physical location corresponding to the winning sponsorship data for the given time segment; Fig 11 and ¶ [0150]); store the winner image data in the memory (historical data is stored in the standings data store 334 (1202); Fig 3, 12 and ¶ [0040], [0151]); identify, in at least one preexisting media content frame, an object insertion location for an object corresponding to the at least one object image identifier (advertisement locations 804, 814 are determined for the advertisements 802, 812 within or in a layer on top of the video stream data; Fig 8 and ¶ [0101]-[0104]); generate at least one processed media content frame by processing the at least one preexisting media content frame to insert at least a rendition of the winner image data at the object insertion location (the winning detected sponsor logo is incorporated with the video frame to create an augmented video frame with the advertisement, identified from sample frames of interest from the input media (¶ [0093]); Fig 8 and ¶ [0101]-[0104]); and transmit the at least one processed media content frame to one or more user devices (the advertisement sponsor is then depicted in subsequent frames, and the computing system may provide a platform whereby options are presented for the user to switch with similar displayed information; Fig 8 and ¶ [0103]-[0104]).

Regarding Claim 2, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the at least one object image identifier comprises at least one of a shape, a descriptor of a shape, a product, or a descriptor of a product (the object identifier is a space on which an advertisement may be overlaid in real time for advertisement purposes; Fig 8 and ¶ [0040], [0061], [0098]-[0099]).
Regarding Claim 3, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the preexisting media content frames include at least one of a still image, a series of video frames, a series of virtual three dimensional content frames, or a hologram (advertisement locations 804, 814 are determined for the advertisements 802, 812 within or in a layer on top of the video stream data (series of video frames) and are based on the best determined locations from a selected sample frame from the input media (¶ [0093]); Fig 8 and ¶ [0095]-[0099], [0101]-[0104]).

Regarding Claim 4, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the at least one processor (processing unit 306; Fig 3 and ¶ [0038]) is further configured to perform image processing on the winner image data to render the winner image data compatible with a format of the preexisting media content frame (the advertisements 802, 812 are presented in the boxes 820, 822 of the frame and may be presented (image processed) with a layer of transparency or shading (virtual augmented video display) based on a selected sample frame from the input media (¶ [0093]); Fig 8 and ¶ [0103]-[0104]).
Regarding Claim 5, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the at least one preexisting media content frame includes a plurality of frames constituting a virtual reality field of view (advertisement locations 804, 814 are determined for the advertisements 802, 812 within or in a layer on top of the video stream data (a series of video frames) based on a selected sample frame from the input media (¶ [0093]), thereby generating an augmented virtual image; Fig 8 and ¶ [0095]-[0099], [0101]-[0104]), and wherein the inserting renders an object from the winning image data within the plurality of frames (the winning bidder's (¶ [0148]) advertisements 802, 812 are presented in the boxes 820, 822 of the frames as virtual augmented video; Fig 8 and ¶ [0103]-[0104]).

Regarding Claim 6, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein transmitting includes transmission over a network (the network interface 308 allows transmission 342 over the network 336 to a client computing system 303, 340, including sponsored object data (Fig 4 and ¶ [0043]-[0044]); Fig 3, 11 and ¶ [0041], [0126]-[0129], [0147]).

Regarding Claim 8, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the winner image data is inserted into the at least one preexisting media content frame such that the winner image data is overlaid on preexisting content in the at least one preexisting media content frame (the advertisements 802, 812 are presented in the boxes 820, 822 of the frame and may be presented (image processed) with an overlay layer of transparency or shading (virtual augmented video display), and may be based on a selected sample frame from the input media (¶ [0093]); Fig 8 and ¶ [0103]-[0104]).
Regarding Claim 9, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the winner image data is inserted into the at least one preexisting media content frame such that an object of the winner image data replaces preexisting content in the at least one preexisting media content frame (the dynamic signage is virtually presented such that broadcasting or streaming viewers can see the sponsorship data for the acquired time segment based on a selected sample frame from the input media (¶ [0093]); Fig 11 and ¶ [0150]).

Regarding Claim 10, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the winner image data is inserted on a portion of the object corresponding to the at least one object image identifier (the exchange may automatically select winning bidders and may cause presentation of sponsorship data associated with the winner based on the given rules; ¶ [0148]).

Regarding Claim 12, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the object corresponding to the at least one object image identifier includes at least one of a wall, a billboard, a picture frame, or a window (the advertisements 802, 812 are presented in the boxes 820, 822 of the frame and may be presented (image processed) with a level of transparency or shading (virtual augmented video display) at a good location for advertisement exposure in the image frame, such as a wall of the venue or the hood of a race car; Fig 8 and ¶ [0098]-[0099], [0102]-[0104]).
Regarding Claim 13, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the winner image data displayed in the preexisting media content frame changes after a predetermined period of time (the sponsor advertisement may appear to a user in a streamed video in real time or near real time for a given period of specified time according to the bidding rules, with the location based on a selected sample frame from the input media (¶ [0093]); Fig 8 and ¶ [0101]-[0104]).

Regarding Claim 14, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), wherein the processor (processing unit 306; Fig 3 and ¶ [0038]) is further configured to obtain the at least one preexisting media content frame in real-time and to insert the rendition of the winner image data in the at least one preexisting media content frame in real-time (the dynamic signage is virtually presented such that broadcasting or streaming viewers can see the sponsorship data for the acquired time segment based on a selected sample frame from the input media (¶ [0093]), and this can be performed in real-time; Fig 11 and ¶ [0101], [0104], [0147]-[0148]).

Regarding Claim 15, Katz et al. teach a computer-implemented method for enabling selective insertion of content into preexisting media content frames (method 1100 for an exchange to dynamically control dynamic virtual signage at a physical location, such as a sporting venue, using computing system 30; Fig 3, 8, 11 and ¶ [0037], [0102], [0126]-[0127]), the method comprising steps identical to claim 1 (as discussed above).

Regarding Claim 16, Katz et al. teach the method of claim 15 (as discussed above), wherein the limitations are identical to claim 2 (as discussed above).

Regarding Claim 17, Katz et al. teach the method of claim 15 (as discussed above), wherein the limitations are identical to claim 3 (as discussed above).
Regarding Claim 18, Katz et al. teach the method of claim 15 (as discussed above), wherein the limitations are identical to claim 4 (as discussed above).

Regarding Claim 19, Katz et al. teach the method of claim 15 (as discussed above), wherein the limitations are identical to claim 5 (as discussed above).

Regarding Claim 20, Katz et al. teach a non-transitory computer readable medium including instructions that, when executed by at least one processor, cause the at least one processor to execute operations (processor 306 can access memory 314 to implement method 1100 for an exchange to dynamically control dynamic virtual signage at a physical location, such as a sporting venue; Fig 3, 11 and ¶ [0037]-[0040], [0126]-[0127]) identical to claim 1 (as discussed above).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Katz et al. (US 2018/0082123) in view of Huang (US 2013/0311303).
Regarding Claim 7, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), including the processor (processing unit 306; Fig 3 and ¶ [0038]), wherein transmitting includes transmitting the processed media content frame to a first user device of the one or more user devices (the sponsorship data may be virtually presented to a streaming viewer, and not presented to the physical attendees; Fig 11 and ¶ [0150]).

Katz et al. do not teach transmitting the at least one preexisting media content frame to a second user device in a manner excluding the winner image data. Huang is analogous art pertinent to the technological problem addressed in the pending application and teaches transmitting the at least one preexisting media content frame to a second user device in a manner excluding the winner image data (advertisements may be bid on and selected for specific users, such as by user profile, demographics, or geographic location, and the advertisements shown to one user may be different (or not shown at all) for a different user; Fig 3A and ¶ [0079]-[0081]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the pending application to combine the teachings of Katz et al. with Huang, including transmitting the at least one preexisting media content frame to a second user device in a manner excluding the winner image data. By providing numerous details regarding the advertisement, a bidder may determine all advertisement details to optimize the marketing, thereby improving the bidder and user experience with the advertisement, as recognized by Huang (¶ [0079]-[0080]).

Regarding Claim 11, Katz et al. teach the computer-implemented visual input reconstruction system of claim 1 (as described above), including the processor (processing unit 306; Fig 3 and ¶ [0038]).
Katz et al. do not teach receiving instructions from the winning client device, the instructions comprising size restrictions for the object corresponding to the at least one object image identifier, wherein inserting at least a rendition of the winner image data is based on the instructions. Huang is analogous art pertinent to the technological problem addressed in the pending application and teaches receiving instructions from the winning client device (bidders may place bids on specific advertisement areas; ¶ [0077]), the instructions comprising size restrictions for the object corresponding to the at least one object image identifier (the advertisement may be limited to the bid specifications (advertisement location, size, interaction, viewers, time displayed and when, interactive or animated advertisement), and the advertisement may include different sizes, angles, distances, and images; Fig 10A, 10B and ¶ [0076]-[0081], [0106]-[0107]), wherein inserting at least a rendition of the winner image data is based on the instructions (the advertisement is based on the winning bid's advertisement specifications; Fig 10A, 10B and ¶ [0076]-[0081], [0106]-[0107]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the pending application to combine the teachings of Katz et al. with Huang, including receiving instructions from the winning client device, the instructions comprising size restrictions for the object corresponding to the at least one object image identifier, wherein inserting at least a rendition of the winner image data is based on the instructions. By providing numerous details regarding the advertisement, a bidder may determine all advertisement details to optimize the marketing, thereby improving the bidder and user experience with the advertisement, as recognized by Huang (¶ [0060]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Travez et al. (US 2008/0189215) teach a system and method for advertisements built into media presentations, including bidding to determine the advertisement to be placed. Dasdan et al. (US 2009/0248534) teach a system and method for offering an auction for online advertisements, including a bidding process for advertisements to be displayed on web query results. Yang et al. (US 2012/0232988) teach a method and system for generating and tracking dynamic advertisements within a video game.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON, whose telephone number is (571) 270-7380. The examiner can normally be reached Monday-Friday, 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHLEEN M BROUGHTON/Primary Examiner, Art Unit 2661

Prosecution Timeline

Mar 06, 2024: Application Filed
Jan 09, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602915: FEATURE FUSION FOR NEAR FIELD AND FAR FIELD IMAGES FOR VEHICLE APPLICATIONS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597233: SYSTEM AND METHOD FOR TRAINING A MACHINE LEARNING MODEL
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586203: IMAGE CUTTING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567227: METHOD AND SYSTEM FOR UNSUPERVISED DEEP REPRESENTATION LEARNING BASED ON IMAGE TRANSLATION
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12565240: METHOD AND SYSTEM FOR GRAPH NEURAL NETWORK BASED PEDESTRIAN ACTION PREDICTION IN AUTONOMOUS DRIVING SYSTEMS
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 92% (+8.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 263 resolved cases by this examiner. Grant probability derived from career allow rate.
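The headline projections follow from the examiner's raw career counts. A minimal sketch in Python, assuming (the page does not state its exact formula) that the "with interview" figure is simply the career allow rate plus the reported 8.3-point lift:

```python
# Reproduce the dashboard's headline projections from career counts.
# Assumption (not stated on the page): the "with interview" figure is the
# career allow rate plus the reported interview lift, in percentage points.

granted = 219         # applications allowed by this examiner
resolved = 263        # total resolved applications (allowed + abandoned)
interview_lift = 8.3  # reported lift, in percentage points

allow_rate = granted / resolved * 100
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0f}%")      # 83%
print(f"With interview:    {with_interview:.0f}%")  # 92%
```

By the same arithmetic, the +21.3% "vs TC avg" delta would imply a Tech Center baseline allow rate of roughly 62%.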
