Prosecution Insights
Last updated: April 19, 2026
Application No. 18/429,137

FUSING NEURAL RADIANCE FIELDS BY REGISTRATION AND BLENDING

Final Rejection §103
Filed: Jan 31, 2024
Examiner: LE, MICHAEL
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Toyota Technological Institute at Chicago
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 88%

Examiner Intelligence

Career allow rate: 66% (568 granted / 864 resolved) — above average, +3.7% vs Tech Center average
Interview lift: +22.1% on resolved cases with an interview
Typical timeline: 3y 3m average prosecution; 61 applications currently pending
Career history: 925 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 52.7% (+12.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Based on career data from 864 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

2. Applicant's amendments filed on 01/20/2026 have been entered. Claims 1, 2, 6, 7, and 21 have been amended. Claims 8, 16 and 20 have been canceled. Claims 1-7, 9-15, and 17-19 are pending in this application, with claims 1, 9 and 17 being independent.

Response to Arguments

3. Applicant's arguments filed on 01/20/2026 with respect to the §103 rejection have been fully considered but are moot in view of the new grounds of rejection. Examiner notes that independent claims 1, 9 and 17 have been amended to include new limitations. Examiner finds these limitations to be unpatentable, as set forth in the detailed action below. In light of the current Office Action, the Examiner respectfully submits that independent claims 1, 9 and 17 are rejected in view of newly discovered reference to Tremblay et al. (US-2024/0123620-A1), with Provisional Application No. 63/411,486, filed on Sep. 29, 2022. On page 6 of Applicant's Remarks, the Applicant argues that the dependent claims are not taught by the prior art, insomuch as they depend from claims that are not taught by the prior art. Examiner respectfully disagrees with these arguments, for the reasons discussed below.

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

5. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over "Block-NeRF: Scalable Large Scene Neural View Synthesis" by Tancik et al. ("Tancik") in view of Xu et al. ("Xu") [US-2024/0267695-A1], with Provisional Application No. 63/443,258, filed on Feb. 03, 2023, further in view of Tremblay et al. ("Tremblay") [US-2024/0123620-A1], with Provisional Application No. 63/411,486, filed on Sep. 29, 2022.

Regarding claim 1, Tancik discloses a method for fusing neural radiance fields (NeRFs) (Tancik - Abstract, at least discloses We present Block-NeRF, a variant of Neural Radiance Fields that can represent large-scale environments.
Specifically, we demonstrate that when scaling NeRF to render city-scale scenes spanning multiple blocks, it is vital to decompose the scene into individually trained NeRFs), the method comprising:

re-rendering a first NeRF and a second NeRF at different viewpoints to form synthesized images (Tancik - Figure 2 shows The scene is split into multiple Block-NeRFs [first NeRF and a second NeRF] that are each trained on data within some radius (dotted orange line) of a specific Block-NeRF origin coordinate (orange dot). To render a target view in the scene [synthesized images], the visibility maps are computed for all of the NeRFs within a given radius. Block-NeRFs with low visibility are discarded (bottom Block-NeRF) and the color output is rendered for the remaining blocks. The renderings are then merged based on each block origin's distance to the target view [form synthesized images from the first NeRF and the second NeRF]; page 8248, section 1. Introduction, at least discloses Recent advancements in neural rendering such as Neural Radiance Fields [40] have enabled photo-realistic reconstruction and novel view synthesis given a set of posed camera images; page 8249, right column, section 2.2. Novel View Synthesis, at least discloses Given a set of input images of a given scene and their camera poses, novel view synthesis seeks to render observed scene content from previously unobserved viewpoints, allowing a user to navigate through a recreated environment with high visual fidelity; page 8251, section 4. Method, at least discloses We dynamically select relevant Block-NeRFs for rendering, which are then composited in a smooth manner when traversing the scene [re-rendering a first NeRF and a second NeRF at different viewpoints to form synthesized images]; page 8252, section 4.2.5 Visibility Prediction, at least discloses When merging multiple Block-NeRFs, it can be useful to know whether a specific region of space was visible to a given NeRF during training […] This proves useful when merging multiple NeRFs, since it can help to determine whether a specific NeRF is likely to produce meaningful outputs for a given location; page 8252, section 4.3.1 Block-NeRF Selection, at least discloses The environment can be composed of an arbitrary number of Block-NeRFs. For efficiency, we utilize two filtering mechanisms to only render relevant blocks for the given target viewpoint. We only consider Block-NeRFs that are within a set radius of the target viewpoint. Additionally, for each of these candidates, we compute the associated visibility […] After filtering, there are typically one to three Block-NeRFs left to merge [form synthesized images from the first NeRF and the second NeRF]);

inferring a transformation between a re-rendered first NeRF and a re-rendered second NeRF based on the synthesized images from the first NeRF and the second NeRF (Tancik - page 8252, section 4.2.5 Visibility Prediction, at least discloses When merging multiple Block-NeRFs [the synthesized images], it can be useful to know whether a specific region of space was visible to a given NeRF during training […] This proves useful when merging multiple NeRFs, since it can help to determine whether a specific NeRF is likely to produce meaningful outputs for a given location; page 8252, section 4.3.1 Block-NeRF Selection, at least discloses The environment can be composed of an arbitrary number of Block-NeRFs. […] We only consider Block-NeRFs that are within a set radius of the target viewpoint. Additionally, for each of these candidates, we compute the associated visibility […] After filtering, there are typically one to three Block-NeRFs left to merge [the synthesized images from the first NeRF and the second NeRF]; page 8253, section 4.3.2 Block-NeRF Compositing, at least discloses We render color images from each of the filtered Block-NeRFs and interpolate between them using inverse distance weighting between the camera origin c and the centers xi of each Block-NeRF. Specifically, we calculate the respective weights as wi ∝ distance(c, xi)^−p, where p influences the rate of blending between Block-NeRF renders. The interpolation is done in 2D image space and produces smooth transitions between Block-NeRFs [a transformation between a re-rendered first NeRF and a re-rendered second NeRF]; page 8253, section 4.3.3 Appearance Matching, at least discloses We can control the appearance of our learned models by an appearance latent code after the Block-NeRF has been trained. These codes are randomly initialized during training and the same code therefore typically leads to different appearances when fed into different Block-NeRFs […] Given a target appearance in one of the Block-NeRFs, we match its appearance in the remaining blocks. To this end, we first select a 3D matching location between pairs of adjacent Block-NeRFs. The visibility prediction at this location should be high for both Block-NeRFs.
Given the matching location, we freeze the Block-NeRF network weights and only optimize the appearance code of the target in order to reduce the l2 loss between the respective area renders […] Figure 6 shows an example optimization, where appearance matching turns a daytime scene into nighttime to match the adjacent Block-NeRF);

blending the re-rendered first NeRF and the re-rendered second NeRF based on the inferred transformation to fuse the first NeRF and the second NeRF (Tancik - Figure 2 shows The scene is split into multiple Block-NeRFs [first NeRF and a second NeRF] that are each trained on data within some radius (dotted orange line) of a specific Block-NeRF origin coordinate (orange dot). The renderings are then merged [blending] based on each block origin's distance to the target view (suggests merging NeRFs from different coordinate systems to fuse the first NeRF and the second NeRF); page 8249, left column, last paragraph, at least discloses we propose dividing up large environments into individually trained Block-NeRFs, which are then rendered and combined dynamically at inference time […] To compute a target view, only a subset of the Block-NeRFs are rendered and then composited [blending the re-rendered first NeRF and the re-rendered second NeRF] based on their geographic location compared to the camera; page 8252, section 4.3.1 Block-NeRF Selection, at least discloses The environment can be composed of an arbitrary number of Block-NeRFs […] We only consider Block-NeRFs that are within a set radius of the target viewpoint. Additionally, for each of these candidates, we compute the associated visibility […] After filtering, there are typically one to three Block-NeRFs left to merge [blending]; page 8253, section 4.3.2 Block-NeRF Compositing, at least discloses We render color images from each of the filtered Block-NeRFs and interpolate between them using inverse distance weighting between the camera origin c and the centers xi of each Block-NeRF. Specifically, we calculate the respective weights as wi ∝ distance(c, xi)^−p, where p influences the rate of blending between Block-NeRF renders. The interpolation is done in 2D image space and produces smooth transitions between Block-NeRFs [transformation]; page 8253, section 5.1. Datasets, at least discloses We calculate the corresponding camera ray origins and directions in a common coordinate system, accounting for the rolling shutter of the cameras);

and fusing NeRFs by registration and blending from images captured by a sensor processor (Tancik - Figure 2 shows The scene is split into multiple Block-NeRFs [first NeRF and a second NeRF] that are each trained on data within some radius (dotted orange line) of a specific Block-NeRF origin coordinate (orange dot). The renderings are then merged [blending] based on each block origin's distance to the target view (suggests blending NeRFs from different coordinate systems for fusing NeRFs); page 8249, left column, last paragraph, at least discloses we propose dividing up large environments into individually trained Block-NeRFs, which are then rendered and combined dynamically at inference time […] To compute a target view, only a subset of the Block-NeRFs are rendered and then composited [registration and blending] based on their geographic location compared to the camera; page 8249, right column, 2nd paragraph, at least discloses performing odometry using various sensors on the vehicle as the images are collected; page 8249, right column, section 2.2. Novel View Synthesis, at least discloses Given a set of input images of a given scene and their camera poses, novel view synthesis seeks to render observed scene content from previously unobserved viewpoints, allowing a user to navigate through a recreated environment with high visual fidelity [registration]); Xu - Fig. 11 and ¶0067, at least disclose augmented-reality system 1100 may include one or more sensors, such as sensor 1140. Sensor 1140 may generate measurement signals in response to motion of augmented-reality system 1100 and may be located on substantially any portion of frame 1110. Sensor 1140 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof).

Tancik does not expressly disclose a first NeRF and a second NeRF at different viewpoints; planning and controlling an object grasp by a robot in response to fusing NeRFs by registration and blending from images captured by a sensor processor. However, Xu discloses a first NeRF and a second NeRF at different viewpoints (Xu - Figs. 1A-1B and ¶0027, at least disclose a novel NeRF-based method for synthesizing real-world audio-visual scenes (i.e., "AV-NeRF"). NeRF implicitly and continuously represents the visual scene using neural fields and the neural fields can be used to render novel views, as illustrated in FIGS. 1A and 1B […] a set of input images from views 102, 104, and 106 (images at different viewpoints). In one example, NeRF may receive views 102, 104, and/or 106 and render views 108 and/or 110 [a first NeRF and a second NeRF at different viewpoints] as shown in FIG. 1B; Fig. 2A shows videos accompanied by audio that are input to an Audio-Visual NeRF (AV-NeRF) system for synthesizing novel views; Fig. 2B shows audio-visual scenes rendered by the AV-NeRF system based on the input videos illustrated in FIG. 2A).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tancik to incorporate the teachings of Xu, and apply the NeRF images at different viewpoints into the first NeRF and the second NeRF, as taught by Tancik, for re-rendering a first NeRF and a second NeRF at different viewpoints to form synthesized images from the first NeRF and the second NeRF.
Doing so would ensure perceptual realism and immersion.

The prior art does not expressly disclose, but Tremblay discloses, planning and controlling an object grasp by a robot in response to fusing NeRFs by registration and blending from images captured by a sensor processor (Tremblay - ¶0057-0059, at least discloses a system utilizes a representation for object rendering, 3D reconstruction, and grasp pose prediction that can be inferred from a single image. The system may utilize various neural radiance fields (NeRFs) and may utilize category-level priors and fine-tune on novel objects with minimal data and time […] the system, also referred to as a neural fields system for robotics object manipulation (NiFR), when utilized in connection with one or more other systems such as those of a robot, obtains as input a single image with basic object annotations (e.g., object poses and volumes), and calculates object latent codes, which can be utilized to re-edit the scene (e.g., render unseen views of the objects) and retrieve grasping poses for the objects […] a NeRF is a function parameterized by a neural network that maps a coordinate and direction vector in Euclidean space to two scalars: color and density. A NeRF can be queried on coordinates along rays to render an image of the scene from any viewpoint […] The system may utilize a representation that can be used to re-render scenes and generate grasping poses; ¶0063, at least discloses The robot may comprise a robotic component, such as a gripper, which can be utilized to grasp the one or more objects; ¶0068, at least discloses To predict grasps, the system may utilize the grasp decoder 110, which can be denoted by Φ. The two decoders may share the parameters of their first two layers. The system may utilize the grasp decoder 110 to predict stable grasps on an object o=(Po, Vo, fo) that may be manipulated. The grasp prediction process may include a grasp proposal stage and a filtering stage → suggests planning and controlling an object grasp; ¶0069, at least discloses A grasp proposal may indicate or otherwise be associated with a value, also referred to as a grasp score, that indicates a probability that a particular grasp configuration or orientation indicated by the grasp proposal, when utilized in connection with a robotic component grasping an object, will result in a successful or stable grasp of the object. A grasp proposal may indicate or otherwise be associated with one or more coordinate values (e.g., corresponding to a location of an object) [planning an object grasp]; ¶0075, at least discloses An open grasp configuration may refer to a configuration of a robotic component in which the robotic component may be able to grasp an object (e.g., if the robotic component is a gripper, the open grasp configuration may be the gripper opened). A closed grasp configuration may refer to a configuration of a robotic component in which the robotic component may be grasping an object (e.g., if the robotic component is a gripper, the closed grasp configuration may be the gripper closed to grasp an object); ¶0096, at least discloses The object may be any suitable object that may be grasped by a robot, which may be in the scene. The robot may comprise a robotic component, such as a gripper, which can be utilized by the robot to grasp objects in the scene [an object grasp by a robot]; ¶0158, at least discloses controller(s) 1036 provide signals for controlling one or more components and/or systems of vehicle 1000 in response to sensor data received from one or more sensors (e.g., sensor inputs); ¶0187, at least discloses RISC cores may interact with image sensors (e.g., image sensors of any cameras described herein), image signal processor(s); ¶0189, at least discloses VPU core may include a digital signal processor such as, for example, a single instruction, multiple data ("SIMD"), very long instruction word ("VLIW") digital signal processor [sensor processor]; ¶0202, at least discloses processor(s) 1010 may further include a high-dynamic range signal processor [sensor processor] that may include, without limitation, an image signal processor [sensor processor] that is a hardware engine that is part of a camera processing pipeline; ¶0245, at least discloses a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components; ¶0385, at least discloses preROP 2042 unit can perform optimizations for color blending, organizing pixel color data, and performing address translations; ¶0692, at least discloses one or more components of systems and/or processors disclosed above can communicate with one or more CPUs, ASICs, GPUs, FPGAs, or other hardware, circuitry, or integrated circuit components that include, e.g., an upscaler or upsampler to upscale an image, an image blender or image blender component to blend, mix, or add images together, a sampler to sample an image (e.g., as part of a DSP), a neural network circuit that is configured to perform an upscaler to upscale an image (e.g., from a low resolution image to a high resolution image), or other hardware to modify or generate an image, frame, or video to adjust its resolution, size, or pixels).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tancik/Xu to incorporate the teachings of Tremblay, and apply the grasp prediction process and the configuration of a robotic component to grasp an object into the Tancik/Xu teachings, for planning and controlling an object grasp by a robot in response to fusing NeRFs by registration and blending from images captured by a sensor processor. Doing so, the amount of memory, time, or computing resources used to generate grasp proposals can be reduced.

Regarding claim 2, Tancik in view of Xu and Tremblay discloses the method of claim 1, and further discloses in which the first NeRF and the second NeRF comprise representations of a 3D scene (Tancik - Figure 2 and page 8251, section 4.1. Block Size and Placement, at least discloses we place Block-NeRFs along a street segment at uniform distances and define the block size to be a sphere around the origin of the blocks (see Figure 2); page 8253, section 4.3.3 Appearance Matching, at least discloses Given a target appearance in one of the Block-NeRFs, we match its appearance in the remaining blocks. To this end, we first select a 3D matching location between pairs of adjacent Block-NeRFs).

Regarding claim 3, Tancik in view of Xu and Tremblay discloses the method of claim 1, and further discloses in which blending comprises compositing predictions from the first NeRF and the second NeRF to form blended images having an image quality greater than images individually rendered by any one of the first NeRF or the second NeRF (Tancik - Figure 2 shows The scene is split into multiple Block-NeRFs [first NeRF and a second NeRF] that are each trained on data within some radius (dotted orange line) of a specific Block-NeRF origin coordinate (orange dot).
To render a target view in the scene [blended images], the visibility maps are computed for all of the NeRFs within a given radius. Block-NeRFs with low visibility are discarded (bottom Block-NeRF) and the color output is rendered for the remaining blocks. The renderings are then merged [blending] based on each block origin's distance to the target view. Figure 2 further shows Block-NeRFs with low visibility are discarded and the "Color Prediction" with high visibility (bottom Block-NeRF); page 8252, section 4.2.5 Visibility Prediction, at least discloses When merging multiple Block-NeRFs, it can be useful to know whether a specific region of space was visible to a given NeRF during training […] Our visibility prediction is similar to the visibility fields proposed by Srinivasan et al. [57]. However, they used an MLP to predict visibility to environment lighting to recover a relightable NeRF model, while we predict visibility to training rays […] The visibility network is small and can be run independently from the color and density networks. This proves useful when merging multiple NeRFs, since it can help to determine whether a specific NeRF is likely to produce meaningful outputs for a given location, as explained in § 4.3.1. The visibility predictions can also be used to determine locations to perform appearance matching between two NeRFs).

Regarding claim 4, Tancik in view of Xu and Tremblay discloses the method of claim 1, and further discloses in which a scene is represented by a plurality of NeRFs (Tancik - Figure 2 shows The scene is split into multiple Block-NeRFs [plurality of NeRFs] that are each trained on data within some radius (dotted orange line) of a specific Block-NeRF origin coordinate (orange dot)).

Regarding claim 5, Tancik in view of Xu and Tremblay discloses the method of claim 1, and further discloses in which each of the plurality of NeRFs has a neighbor NeRF (Tancik - page 8253, section 4.3.3 Appearance Matching, at least discloses Starting from a root Block-NeRF, we propagate the optimized appearance through the scene by iteratively optimizing the appearance of its neighbors. If multiple blocks surrounding a target Block-NeRF have already been optimized, we consider each of them when computing the loss).

Regarding claim 6, Tancik in view of Xu and Tremblay discloses the method of claim 1, and further discloses further comprising training the first NeRF and the second NeRF on a separate set of images (Tancik - page 8251, section 4. Method, at least discloses a set of Block-NeRFs that can be independently trained in parallel and composited during inference. This independence enables the ability to expand the environment with additional Block-NeRFs or update blocks without retraining the entire environment).

Regarding claim 7, Tancik in view of Xu and Tremblay discloses the method of claim 6, and further discloses in which the separate set of images capture different, overlapping, portions of the same scene (Tancik - page 8251, section 4.1. Block Size and Placement, at least discloses The individual Block-NeRFs should be arranged such that they collectively achieve full coverage of the target environment. We typically place one Block-NeRF at each intersection, covering the intersection itself and any connected street 75% of the way until it converges into the next intersection (see Figure 1). This results in a 50% overlap between any two adjacent blocks on the connecting street segment, making appearance alignment easier between them; page 8254, section 5.3. Block-NeRF Size and Placement, at least discloses We compare performance on our Mission Bay dataset versus the number of Block-NeRFs used. We show details in Table 2, where depending on granularity, the Block-NeRF sizes range from as small as 54m to as large as 544m. We ensure that each pair of adjacent blocks overlaps by 50% and compare other overlap percentages in the supplement).

Regarding claims 9-15, all claim limitations are set forth as in claims 1-7 in a non-transitory computer-readable medium having program code recorded thereon for fusing neural radiance fields (NeRFs), the program code being executed by a processor, and are rejected per the discussion of claims 1-7.

Regarding claim 9, Tancik in view of Xu and Tremblay discloses a non-transitory computer-readable medium having program code recorded thereon for fusing neural radiance fields (NeRFs), the program code being executed by a processor (Tancik - page 8253, section 4.3.3 Appearance Matching, at least discloses We can control the appearance of our learned models by an appearance latent code after the Block-NeRF has been trained. These codes are randomly initialized during training and the same code therefore typically leads to different appearances when fed into different Block-NeRFs; page 8253, section 5.1. Datasets, third paragraph, at least discloses We divide this dataset into 35 Block-NeRFs […] After filtering out some redundant image captures (e.g. stationary captures), each Block-NeRF is trained on between 64,575 to 108,216 images. The overall dataset is composed of 13.4h of driving time sourced from 1,330 different data collection runs, with a total of 2,818,745 training images; Xu - ¶0105, at least discloses A non-transitory computer-readable medium may include one or more computer-readable instructions that, when executed by at least one processor of a computing device; Tremblay - Fig. 5 and ¶0094, at least disclose some or all of process 500 (or any other processes described herein, or variations and/or combinations thereof) is performed under control of one or more computer systems configured with computer-executable instructions and is implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, software, or combinations thereof) and comprising the method of claim 1.

The system claims 17-20 are similar in scope to the functions performed by the method of claims 1, 3 and 5, and therefore claims 17-20 are rejected under the same rationale.

Regarding claim 17, Tancik in view of Xu and Tremblay discloses a system for fusing neural radiance fields (NeRFs) (Xu - Fig. 1A shows example images input to a Neural Radiance Fields (NeRF) system for synthesizing novel views; Fig. 5A shows recording devices for capturing audio-visual input for an AV-NeRF system; Figs. 11-12 show exemplary virtual reality and augmented reality devices that may include audio-visual scene synthesis systems; Tremblay - Fig. 10C and ¶0173, at least disclose controller(s) 1036 may be used for a variety of functions. In at least one embodiment, controller(s) 1036 may be coupled to any of various other components and systems of vehicle 1000, and may be used for control of vehicle 1000, artificial intelligence of vehicle 1000, infotainment for vehicle 1000, and/or other functions; ¶0219, at least discloses vehicle 1000 may further include GNSS sensor(s) 1058 (e.g., GPS and/or assisted GPS sensors), to assist in mapping, perception, occupancy grid generation, and/or path planning functions), the system comprising the method of claim 1.

Conclusion

6. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE whose telephone number is (571) 272-5330. The examiner can normally be reached 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL LE/ Primary Examiner, Art Unit 2614
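The compositing step the examiner relies on from Tancik's §4.3.2 is simple enough to sketch: weight each surviving block's render by the inverse distance from the camera origin to that block's origin, raised to a power p, then blend in image space. Below is a minimal NumPy illustration of that weighting scheme; the function name, array shapes, and default exponent are illustrative assumptions, not Block-NeRF's actual implementation.

```python
import numpy as np

def blend_block_renders(renders, block_centers, camera_origin, p=4.0):
    """Inverse-distance-weighted blend of per-block renders.

    Sketch of the compositing Tancik describes: weights w_i proportional
    to distance(c, x_i)^-p, normalized, then applied in 2D image space.

    renders: list of HxWx3 arrays, one per surviving Block-NeRF
    block_centers: list of 3-vectors x_i (block origin coordinates)
    camera_origin: 3-vector c of the target camera
    p: exponent controlling how quickly blending falls off with distance
    """
    c = np.asarray(camera_origin, dtype=float)
    # Unnormalized weights: block origins closer to the camera get larger weights.
    w = np.array([np.linalg.norm(c - np.asarray(x)) ** -p for x in block_centers])
    w /= w.sum()  # normalize so the output is a convex combination of renders
    # Weighted sum over the candidate renders (all assumed the same size).
    return sum(wi * np.asarray(img, dtype=float) for wi, img in zip(w, renders))
```

A larger p makes the nearest block dominate almost entirely; a smaller p gives slower cross-fades between adjacent blocks, which matches the paper's description of p controlling the rate of blending between Block-NeRF renders.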

Prosecution Timeline

Jan 31, 2024
Application Filed
Oct 31, 2025
Non-Final Rejection — §103
Jan 14, 2026
Examiner Interview Summary
Jan 14, 2026
Applicant Interview (Telephonic)
Jan 20, 2026
Response Filed
Mar 26, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579211
AUTOMATED SHIFTING OF WEB PAGES BETWEEN DIFFERENT USER DEVICES
2y 5m to grant; granted Mar 17, 2026

Patent 12579738
INFORMATION PRESENTING METHOD, SYSTEM THEREOF, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant; granted Mar 17, 2026

Patent 12579072
GRAPHICS PROCESSOR REGISTER FILE INCLUDING A LOW ENERGY PORTION AND A HIGH CAPACITY PORTION
2y 5m to grant; granted Mar 17, 2026

Patent 12573094
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
2y 5m to grant; granted Mar 10, 2026

Patent 12558788
SYSTEM AND METHOD FOR REAL-TIME ANIMATION INTERACTIVE EDITING
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 88% (+22.1%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 864 resolved cases by this examiner. Grant probability derived from career allow rate.
