Prosecution Insights
Last updated: April 19, 2026
Application No. 18/779,348

METHODS AND SYSTEMS FOR LIGHT PROBE GENERATION

Non-Final OA: §102, §103
Filed: Jul 22, 2024
Examiner: RENZE, GEORGE NICHOLAS
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Sony Interactive Entertainment Inc.
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (16 granted / 24 resolved; +4.7% vs TC avg) — above average
Interview Lift: +33.3% across resolved cases with interview — strong
Typical Timeline: 2y 7m average prosecution; 33 applications currently pending
Career History: 57 total applications across all art units
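The headline metrics above are simple ratios over the examiner's resolved docket. As a minimal sketch of how they can be computed (the record layout and cohort construction are assumptions for illustration, not this dashboard's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate gap between cases with and without an examiner interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    if not with_iv or not without_iv:
        return None  # lift is undefined without both cohorts
    return allow_rate(with_iv) - allow_rate(without_iv)

# 16 grants out of 24 resolved cases reproduces the 67% career allow rate above.
docket = [ResolvedCase(granted=i < 16, had_interview=False) for i in range(24)]
print(f"{allow_rate(docket):.0%}")  # -> 67%
```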

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 24 resolved cases
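These per-statute figures amount to counting, for each statute, the share of the examiner's resolved cases that drew that rejection, then differencing against the Tech Center average. A sketch under that assumption (the input shape is hypothetical):

```python
from collections import Counter

def statute_rates(cases):
    """Share of resolved cases whose office actions raised each statute.

    `cases` is a list of per-case statute sets, e.g. [{"102", "103"}, {"103"}];
    this input shape is an assumption for illustration.
    """
    counts = Counter(statute for statutes in cases for statute in statutes)
    return {statute: n / len(cases) for statute, n in counts.items()}

def delta_vs_tc(examiner, tc_average):
    """Signed difference against the Tech Center average, per statute."""
    keys = set(examiner) | set(tc_average)
    return {k: examiner.get(k, 0.0) - tc_average.get(k, 0.0) for k in keys}

rates = statute_rates([{"103"}, {"102", "103"}, {"103", "112"}, {"103"}])
print(rates)  # e.g. {'103': 1.0, '102': 0.25, '112': 0.25}
```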

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities:

Paragraph 7, Line 2, “claim 15” should be changed to “claim 16”, because claim 16 refers to a system.

Paragraph 17, Line 2, the words “or each” should be removed before “RAM” so that it then reads as “The

Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 12 and 15-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kurz et al. (U.S. Patent No. 12,002,165 B1), hereinafter Kurz.

Regarding claim 1, Kurz discloses a method of generating a light probe in a virtual environment (FIG. 6 and Col. 8, Lines 30-32 teach that FIG. 6 is a flowchart representation of a method of providing a view of a 3D environment based on 3D positions determined for the set of light probes.), the method comprising:

determining a first region of the light probe based on one or more predetermined conditions (Col. 8, Line 63 through Col. 9, Line 13 teach that at block 620, the method 600, based on the digital representation, determines 3D positions in a 3D environment for a set of light probes, e.g., light probes 210, 220, and 230 discussed herein with respect to FIGS. 2 and 3. The 3D positions may be determined based on environment geometry (e.g., flat surfaces, empty space, etc.), semantic understanding of the physical environment (e.g., identifying tables, chairs, walls, desks, etc.), and/or context (e.g., identifying what the user will do, what/where virtual objects are permitted and/or are likely to be added, etc.). Determining the 3D positions may involve determining centers of projection to use for obtaining data for each of the light probes. Determining the 3D positions may be further based on an already-defined light probe, e.g., a user-defined light probe, which may be immutable. Accordingly, the 3D positions may be determined based on an existing immutable light probe that will be complemented by one or more additional light probes positioned at the 3D positions.);

collecting light information from the virtual environment, the light information relating to light passing through a location of the light probe in the virtual environment (Col. 7, Lines 56-67 teach that the light probes 210, 220, 230 provide lighting information that can be used to provide realistic appearances for virtual objects placed at or near light probe locations in the 3D environment 200. The 3D environment 200 may be a real-time environment that is based at least in part on a live physical environment. The light probe locations may be based on aspects of such a physical environment. For example, a digital representation of the geometry, semantics, or other attributes of a live physical environment may be input used to determine where to position the light probes, how many light probes to use, and/or various light probe attributes.); and

storing the light information in the light probe, wherein the light information is stored at a higher resolution in the first region of the light probe than in a, different, second region of the light probe (Col. 5, Lines 48-52 teach that a light probe stores lighting information using functions defined on the surface of a sphere, such as in the form of the coefficients of basis functions, such as Spherical Harmonics. Additionally, Col. 9, Lines 33-42 teach that the method 600 may determine the type and/or resolution (e.g., angular, cube map face, equirectangular, etc.) to use for the light probes based on the digital representation. For example, if one portion of an environment has a significantly greater amount of detail than other portions, a higher resolution may be used for the light probes positioned proximate that portion of the environment. Lower resolution representations (which may require fewer computing and time resources) may be used for light probes positioned proximate the other, less-detailed portions of the environment.).

Regarding claim 2, Kurz discloses everything claimed as applied above (see claim 1). In addition, Kurz discloses wherein collecting light information comprises a plurality of light collection operations for different regions of the light probe (FIG. 2 and Col. 5, Lines 1-12 teach that FIG. 2 illustrates a first light probe 210 above the representation 130a of the first table 130, a second light probe 220 above the representation 140a of the second table 140, and a third light probe 230 above a floor portion of the 3D environment 200. The light probes 210, 220, 230 provide lighting information that describes light incident on respective points in space in the 3D environment 200. Since the 3D environment is based at least in part on the physical environment 100, the light probes may represent lighting information that describes light incident on certain points in space based on the lighting arrangement in the physical environment.), and wherein storing the light information comprises storing the result of light collection operations for the first region at a higher resolution than for light collection operations for the second region (Col. 9, Lines 35-42 teach that if one portion of an environment has a significantly greater amount of detail than other portions, a higher resolution may be used for the light probes positioned proximate that portion of the environment. Lower resolution representations (which may require fewer computing and time resources) may be used for light probes positioned proximate the other, less-detailed portions of the environment.).
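The §102 theory for claims 1-2 turns on region-dependent resolution: lighting data stored more densely in one part of the probe than another. As a purely illustrative sketch of that concept (the class, face layout, and resolutions below are assumptions for exposition, not Kurz's implementation or the claimed method), a cube-map probe might allocate more texels to a designated first region:

```python
import numpy as np

class LightProbe:
    """Toy probe: a cube map whose faces can be allocated at different resolutions."""

    def __init__(self, base_res=16, detail_res=64, detail_faces=("+y",)):
        self.faces = {}
        for face in ("+x", "-x", "+y", "-y", "+z", "-z"):
            res = detail_res if face in detail_faces else base_res
            # RGB radiance samples for this face; higher res = finer lighting detail
            self.faces[face] = np.zeros((res, res, 3), dtype=np.float32)

    def store(self, face, u, v, rgb):
        """Write collected light for a direction that maps to (u, v) on `face`."""
        res = self.faces[face].shape[0]
        x, y = int(u * (res - 1)), int(v * (res - 1))
        self.faces[face][y, x] = rgb

probe = LightProbe(detail_faces=("+y",))  # "+y" plays the role of the first region
probe.store("+y", 0.5, 0.5, (1.0, 0.9, 0.8))
print({face: arr.shape for face, arr in probe.faces.items()})
```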
Regarding claim 3, Kurz discloses everything claimed as applied above (see claim 2). In addition, Kurz discloses wherein storing the result of a light collection operation for the first region at a higher resolution than for light collection operations for the second region comprises storing the result of each light collection operation for the first region in fewer pixels of the light probe than for light collection operations for the second region (Col. 5, Lines 13-19 teach that light probes 210, 220, 230 may be implemented in various ways. FIG. 5 illustrates exemplary lighting information of a light probe that represents light intensity data from all directions around a point in a 3D environment. The pixel values in such a map of lighting data each represent a light intensity value of light received from a given direction at the point in the 3D environment.).

Regarding claim 4, Kurz discloses everything claimed as applied above (see claim 1). In addition, Kurz discloses wherein the one or more predetermined conditions comprise at least one condition relating to one or more objects in the virtual environment (Col. 9, Lines 14-32 teach that the method 600 may determine the geometric boundaries (e.g., 3D bounding boxes) of light probes that define the extent of space in the 3D environment for which each light probe is applicable. The geometric boundaries of the light probes, for example, may be determined to correspond to nearby table tops, floor portions, or other surfaces. The geometric boundaries of light probes may be based on semantic understanding of objects in the physical environment. For example, an object that is a desk may be more likely to have a virtual object placed upon it than other types of objects and thus may be a relatively more important portion of the environment with respect to accurate lighting than the other portions. A light probe may be positioned above the desk and assigned a geometric boundary corresponding to the exact desk dimensions to ensure that a virtual object placed on the desk will have desirable lighting. Example geometric boundaries include geometric boundaries 310, 320, and 330 discussed herein with respect to FIG. 3.).

Regarding claim 5, Kurz discloses everything claimed as applied above (see claim 4). In addition, Kurz discloses wherein the first region is determined in dependence on whether a region of the light probe is facing at least one object of the one or more objects (FIG. 4 and Col. 8, Lines 1-17 teach that FIG. 4 illustrates providing virtual objects in the 3D environment of FIGS. 2 and 3 based on two of the light probes 210, 220. In this example, the first virtual vase 410a is positioned on the representation 130a of the first table 130 and the second virtual vase 410b is positioned on the representation 140a of the second table 140. Based on its position (e.g., within the geometric boundary 310 (FIG. 3)), the first light probe 210 is used to provide the appearance of the first virtual vase 410a. For example, given the proximity and position of the first light source representation 450 (and thus the corresponding light source 150), the left side of the first virtual vase 410a may be more illuminated/brighter than the right side of the first virtual vase 410a. Similarly, given the relative proximity and position of the first light source representation 450 (and thus the corresponding light source 150), virtual shadow 420a may be displayed on the right side of the first virtual vase 410a.).
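Claim 5 conditions the first region on whether a region of the probe faces an object. One simple way such a test could work (a hypothetical geometric check, not the application's disclosed method) is a cosine test between the region's outward direction and the direction from the probe to the object:

```python
import numpy as np

def region_faces_object(region_dir, probe_pos, object_pos, cos_threshold=0.5):
    """Return True if a probe region's outward direction points toward an object.

    A region "faces" the object when the angle between its outward direction and
    the probe-to-object direction is small (cosine above the threshold). All
    inputs are hypothetical 3-vectors; this is one plausible test only.
    """
    to_object = np.asarray(object_pos, float) - np.asarray(probe_pos, float)
    to_object /= np.linalg.norm(to_object)
    region_dir = np.asarray(region_dir, float)
    region_dir /= np.linalg.norm(region_dir)
    return float(region_dir @ to_object) >= cos_threshold

# The +y face of a probe at the origin faces an object directly above it.
print(region_faces_object([0, 1, 0], [0, 0, 0], [0, 2, 0]))  # True
```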
Regarding claim 6, Kurz discloses everything claimed as applied above (see claim 4). In addition, Kurz discloses wherein the first region is determined in dependence on one or more characteristics of the one or more objects (Col. 11, Lines 13-27 teach that in a general semantic approach, semantic information is derived for a 3D mesh representing the environment and light probes are positioned above surfaces of particular types of objects. For example, light probes may be positioned based on prioritizing tables over chairs, floors over walls and ceilings, etc. In some implementations, a combined semantic and general approach positions light probes based on identifying nearby objects of particular types, e.g., tables within 2 meters. In other rule-based implementations, light probes are placed additionally or alternatively based upon semantic information regarding lights, e.g., placing light probes where light changes spatially, for example, relative to windows and other light sources. This may involve, for example, placing relatively more light probes where there are more spatial light changes.), wherein the one or more characteristics of the objects comprise one or more selected from the list consisting of: brightness, surface roughness, shape, level of detail, or proximity to other objects (Col. 11, Lines 53-63 teach that the light probes may additionally or alternatively be positioned and configured based on attributes of the virtual objects that can be or are positioned within the 3D environment. For example, light probe position and/or configuration may be based on the pose, shape, and/or materials of such virtual objects. In one example, based on information that a shiny object is positioned at a particular position within a 3D environment, one or more light probes may be positioned nearby with a relatively high resolution to ensure accurate display of the shiny object and/or other objects positioned nearby.).

Regarding claim 12, Kurz discloses everything claimed as applied above (see claim 1). In addition, Kurz discloses wherein the first region and the second region comprise different regions of a face of the light probe, or the first region and the second region comprise different faces of the light probe (Col. 5, Lines 33-44 of Kurz teach that the 3D environment 200 may be re-projected onto 6 faces of a cube, onto a sphere, or onto two paraboloids using a single center of projection. The re-projection may then be mapped from the given surfaces onto a 2D texture via texture mapping and represents the lighting of the 3D environment 200 for the given center of projection. For example, a cube map of the 3D environment 200 may be rendered from the position of light probe 210 by projecting 6 renderings that represent the 6 faces of a cube into an environment map. The light probe 200 may be defined by capturing and storing data that defines the surrounding 3D environment 200 from its position in the 3D environment. Additionally, Col. 9, Lines 33-35 teach that the method 600 may determine type and/or resolution (e.g., angular, cube map face, equirectangular, etc.) to use for the light probes based on the digital representation.).

Regarding claim 15, the non-transitory computer-readable storage medium correlates to and is rejected similarly to the method steps of claim 1 (see claim 1 above). Additionally, Kurz discloses a non-transitory computer-readable storage medium storing a computer program comprising computer executable instructions (Col. 8, Lines 37-39 teach that in some implementations, the method 600 is performed by a processor executing code stored in a non-transitory computer-readable medium (e.g., a memory).).

Regarding claim 16, the system steps correlate to and are rejected similarly to the method steps of claim 1 (see claim 1 above). Additionally, Kurz discloses a system for generating a light probe in a virtual environment (FIG. 7 and Col. 15, Lines 19-29 teach that FIG. 7 is a block diagram illustrating exemplary components of the device 120 configured in accordance with some implementations. While certain specific features are illustrated, those skilled in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the implementations disclosed herein. To that end, as a non-limiting example, in some implementations the device 120 includes one or more processing units 702 (e.g., microprocessors, ASICs, FPGAs, GPUs, CPUs, processing cores, and/or the like). Additionally, Col. 16, Lines 26-31 teach that the memory 720 or the non-transitory computer readable storage medium of the memory 720 stores an optional operating system 730 and one or more instruction set(s) 740. The operating system 730 includes procedures for handling various basic system services and for performing hardware dependent tasks.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7, 9-11 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kurz in view of Karyodisa et al. (U.S. Patent No. 10,699,126 B2), hereinafter Karyodisa.

Regarding claim 7, Kurz discloses everything claimed as applied above (see claim 1). In addition, Kurz discloses wherein the one or more predetermined conditions comprise at least one condition relating to the light probe in frames of the virtual environment (Col. 6, Line 64 through Col. 7, Line 3 teach that the light probes 210, 220, 230 may be dynamic, where the light probe positions and/or the lighting represented by the light probes is updated to represent a dynamically changing 3D environment 200. For example, light probes may be computed at runtime for every frame or based on the occurrence of various triggers and used to provide views of the 3D environment 200 over time.). However, Kurz fails to disclose in one or more previous frames of the virtual environment.

Karyodisa discloses in one or more previous frames of the virtual environment (Col. 12, Lines 27-42 teach that another illustrative example of an object tracking technique includes a key point technique. Using face tracking as an example, the key point technique can include detecting some key points from a detected face (or other object) in a previous frame. For example, the detected key points can include significant points on a face, such as facial landmarks (described in more detail below). The key points can be matched with features of objects in a current frame using template matching. Examples of template matching methods can include optical flow (as described above), local feature matching, and/or other suitable techniques. In some cases, the local features can be histogram of gradient, local binary pattern (LBP), or other features. Based on the tracking results of the key points between the previous frame and the current frame, the faces in the current frame that match faces from a previous frame can be located.).

Since Kurz teaches utilizing light probes for object detection through various conditions, with the light probes used across multiple frames to capture an object's condition data, and Karyodisa teaches detecting and tracking objects using various known object tracking techniques, such as the key point technique, which tracks objects via data captured at current and previous frames, it would have been obvious to a person having ordinary skill in the art to combine these features so that, when tracking any conditional data related to the light probes, a technique like the key point technique could be used to capture condition data related to the light probes from any previous frames within the virtual environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kurz to incorporate the teachings of Karyodisa, so that the combined features would allow the capturing of conditional data related to the light probes in previous frames, which would reduce computational load and allow faster data processing, benefiting any type of object tracking within a virtual environment.

Regarding claim 9, Kurz in view of Karyodisa disclose everything claimed as applied above (see claim 7). In addition, Kurz in view of Karyodisa disclose wherein the first region is determined in dependence on whether a region of the light probe was used to light one or more objects in the virtual environment in the one or more previous frames (Col. 12, Lines 23-34 of Kurz teach that the light probe updates may involve any of the determinations (e.g., machine learning-based, rule-based, optimization-based) described herein. Moreover, such determinations may additionally account for a prior set of light probes. For example, a machine learning model may be trained with an additional loss on the difference between the previous light probes and the current/new light probes. An optimization-based approach may use an additional cost for changes between previous and current/new light probes. Accounting for previous light probes can encourage temporal consistency, reducing noticeable changes that might otherwise occur due to light probe transitions. Additionally, FIGS. 4A and 4B and Col. 13, Lines 57-63 of Karyodisa teach that the video frames 400A and 400B shown in FIG. 4A and FIG. 4B illustrate two frames of a video sequence capturing images of a scene. The multiple faces in the scene captured by the video sequence can be detected and tracked across the frames of the video sequence, including frames 400A and 400B. The frame 400A can be referred to as a previous frame and the frame 400B can be referred to as a current frame.).

Regarding claim 10, Kurz in view of Karyodisa disclose everything claimed as applied above (see claim 7). In addition, Kurz in view of Karyodisa disclose wherein the first region is determined in dependence on whether a region was facing a virtual camera in the virtual environment in the one or more previous frames (FIG. 4 and Col. 8, Lines 1-17 of Kurz teach that FIG. 4 illustrates providing virtual objects in the 3D environment of FIGS. 2 and 3 based on two of the light probes 210, 220. In this example, the first virtual vase 410a is positioned on the representation 130a of the first table 130 and the second virtual vase 410b is positioned on the representation 140a of the second table 140. Based on its position (e.g., within the geometric boundary 310 (FIG. 3)), the first light probe 210 is used to provide the appearance of the first virtual vase 410a. For example, given the proximity and position of the first light source representation 450 (and thus the corresponding light source 150), the left side of the first virtual vase 410a may be more illuminated/brighter than the right side of the first virtual vase 410a. Similarly, given the relative proximity and position of the first light source representation 450 (and thus the corresponding light source 150), virtual shadow 420a may be displayed on the right side of the first virtual vase 410a. Additionally, Col. 13, Lines 57-63 of Karyodisa teach that the video frames 400A and 400B shown in FIG. 4A and FIG. 4B illustrate two frames of a video sequence capturing images of a scene. The multiple faces in the scene captured by the video sequence can be detected and tracked across the frames of the video sequence, including frames 400A and 400B. The frame 400A can be referred to as a previous frame and the frame 400B can be referred to as a current frame.).

Regarding claim 11, Kurz discloses everything claimed as applied above (see claim 1); however, Kurz fails to disclose further comprising predicting motion, relative to the light probe, of an object in the virtual environment based on one or more previous frames of the virtual environment.

Karyodisa discloses predicting motion of an object based on one or more previous frames of the virtual environment (Col. 12, Lines 8-19 teach that a Kalman filter based object tracker uses signal processing to predict the location of a moving object based on prior motion information. For example, the location of a tracker in a current frame can be predicted based on information from a previous frame. In some cases, the Kalman filter can measure a tracker's trajectory as well as predict its future location(s). For example, the Kalman filter framework can include two steps. The first step is to predict a tracker's state, and the second step is to use measurements to correct or update the state. In this case, the tracker from the last frame can predict its location in the current frame.).

Since Kurz teaches the initial method steps for utilizing light probes for object detection through various conditions, with the light probes used across multiple frames to capture an object's condition data, and Karyodisa teaches detecting and tracking objects using various known object tracking techniques, such as utilizing a Kalman filter to track an object's motion via data collected from previous frames, it would have been obvious to a person having ordinary skill in the art to combine these features so that, when tracking any conditional data related to the light probes, a technique like the Kalman filter could be used to capture motion data related to the light probes from any previous frames within the virtual environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kurz to incorporate the teachings of Karyodisa, so that a technique like the Kalman filter could track and acquire an object's motion data based on previous frames, which would improve overall accuracy as well as any prediction of an object's movement throughout the virtual environment.

Furthermore, Kurz in view of Karyodisa disclose wherein the first region is determined in dependence on the predicted motion of the object in the virtual environment (Col. 31, Lines 24-33 of Karyodisa teach that in some examples, the process 900 includes determining object information associated with the one or more detected objects based on the object detection performed on the one or more video frames of the first video. The object information can include a bounding region (e.g., a bounding box, a bounding ellipse, or other suitable bounding region) for each object, information defining one or more landmarks on the object, information defining an angle, orientation, and/or pose of the object, and/or other suitable information that can be used for performing the object recognition process. Additionally, Col. 12, Lines 19-26 of Karyodisa teach that when the current frame is received, the tracker can use the measurement of the object in the current frame to correct its location in the current frame, and then can predict its location in the next frame. The Kalman filter can rely on the measurement of the associated object(s) to correct the motion model for the object tracker and to predict the location of the tracker in the next frame.).
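The Kalman predict/correct loop that Karyodisa relies on for claim 11 can be illustrated with a minimal one-dimensional constant-velocity tracker (the matrices and noise values below are illustrative assumptions, not Karyodisa's parameters):

```python
import numpy as np

# Constant-velocity model: state is [position, velocity].
F = np.array([[1.0, 1.0],   # position += velocity each frame
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # we only measure position
Q = np.eye(2) * 1e-3        # process noise (assumed)
R = np.array([[1e-1]])      # measurement noise (assumed)

x = np.array([0.0, 1.0])    # initial state guess
P = np.eye(2)               # initial covariance

def predict(x, P):
    """Step 1: project the tracker's state into the next frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Step 2: correct the prediction with the current frame's measurement."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + (K @ y).ravel(), (np.eye(2) - K @ H) @ P

for z in [1.1, 2.0, 2.9]:               # noisy positions from three frames
    x, P = predict(x, P)
    x, P = update(x, P, np.array([z]))
print(x)  # estimated [position, velocity] after the last frame
```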
Regarding claim 14, Kurz discloses everything claimed as applied above (see claim 1); however, Kurz fails to disclose further comprising applying upscaling to the stored light information in the first and/or second region of the light probe to increase the resolution of the stored light information.

Karyodisa discloses applying upscaling to stored information to increase its resolution (Col. 31, Lines 39-54 teach that in some examples, modifying the object information from the first resolution to the second resolution includes upscaling the object information from the first resolution to the second resolution. As noted above, object information associated with a detected object can include information defining a bounding region generated for the detected object. In some cases, modifying the object information from the first resolution to the second resolution includes upscaling the bounding region from a first size to a second size. The second size is based on a ratio between the first resolution and the second resolution. For instance, as described above, if the high resolution video is 3840×2160 and the low resolution video is 960×540, the ratio between the first resolution and the second resolution is 4 (3840/960 = 4 and 2160/540 = 4). In such an example, the bounding box can be upscaled by a factor of 4.).

Since Kurz teaches the initial method steps for generating light probes that collect light information within a virtual environment with respect to the light probe's resolution, and Karyodisa teaches using and modifying object information within a virtual environment, including its resolution, with the capability to upscale the resolution associated with that object, it would have been obvious to a person having ordinary skill in the art to combine these features so that the resolution data associated with the light probe could also be upscaled to different resolutions if necessary. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kurz to incorporate the teachings of Karyodisa, so that upscaling could be applied to a light probe's resolution data, allowing any low-resolution data captured to be improved and yielding data of better quality and overall detail.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kurz in view of Sun et al. (Pub. No. US 2018/0089894 A1), hereinafter Sun.

Regarding claim 13, Kurz discloses everything claimed as applied above (see claim 1); however, Kurz fails to disclose wherein collecting light information comprises importance sampling of the light information.

Sun discloses wherein collecting light information comprises importance sampling of the light information (Paragraph 36 teaches that, accordingly, in one or more embodiments, the digital full path rendering system combines Gaussians of different dimensions. In particular, rather than separately estimating the global light transport function with Gaussian functions for a plurality of dimensions, the digital full path rendering system utilizes combination weights to jointly fit Gaussian functions for a plurality of dimensions to the global light transport function (e.g., utilizing multiple importance sampling techniques). In this manner, the digital full path rendering system avoids generating redundant distributions across multiple dimensions. Moreover, the digital full path rendering system reduces the complexity of estimating global light transport functions.).

Since Kurz teaches the initial method steps for generating light probes that collect light information within a virtual environment, and Sun teaches collecting light information using various importance sampling techniques, it would have been obvious to a person having ordinary skill in the art to combine these features so that, while collecting light information for the different light probes, importance sampling techniques, such as fitting Gaussian functions, could also be incorporated. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kurz to incorporate the teachings of Sun, so that importance sampling could be incorporated into the process of collecting light information, which would reduce the overall computational cost of generating the light probes by prioritizing the samples that have the most impact on the data associated with a particular light probe.
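The generic technique named in the claim-13 rejection can be sketched as follows. This toy estimator (a single Gaussian proposal over a one-dimensional angle, with an invented "radiance" function) only illustrates importance sampling in general, not Sun's multiple-importance-sampling method:

```python
import math
import random

def radiance(theta):
    """Toy incident-light function over a 1-D angle: a bright lobe near 0."""
    return math.exp(-8.0 * theta * theta)

def importance_sample_mean(n=10_000):
    """Estimate the mean of `radiance` over [-pi, pi] by importance sampling.

    Samples are drawn from a Gaussian proposal concentrated where the light is
    bright, and each sample is weighted by f(x)/p(x). Proposal and target are
    illustrative assumptions.
    """
    sigma = 0.25
    total = 0.0
    for _ in range(n):
        x = random.gauss(0.0, sigma)
        if not -math.pi <= x <= math.pi:
            continue  # out-of-range draws are vanishingly rare at this sigma
        p = math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
        total += radiance(x) / p
    return total / n / (2 * math.pi)  # normalize to the mean over the interval

print(importance_sample_mean())  # ~0.0997, concentrating samples near the lobe
```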
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kurz in view of Karyodisa as applied to claim 7 above, and further in view of Sun.

Regarding claim 8, Kurz in view of Karyodisa disclose everything claimed as applied above (see claim 7). In addition, Kurz in view of Karyodisa disclose wherein the first region is determined in dependence on whether a region of the light probe had one or more characteristics meeting a predetermined condition in the one or more previous frames (Col. 25, Lines 45-55 of Karyodisa teach that in one example use case, low light conditions may be present, which may lead to poor object detection results being provided by the object detection engine 632. In such cases, the object detection and recognition system 600 can automatically increase the video resolution of the low resolution video stream 631 so that the object detection engine 632 can perform more accurate object detection. In such an example, when lighting conditions get worse, leading to a drop in face detection accuracy, an object detection and recognition system can increase the resolution of the low resolution video to improve object detection.). However, Kurz in view of Karyodisa fail to disclose wherein the one or more characteristics of the light probe comprise one or more selected from the list consisting of: luminosity, or colour.

Sun discloses wherein the one or more characteristics of the light probe comprise one or more selected from the list consisting of: luminosity, or colour (Paragraph 107 teaches that, as mentioned previously, in one or more embodiments, the digital full path rendering system generates a digital image of a virtual environment based on sampled paths. Indeed, by identifying full light paths between a light source and a camera perspective in a virtual environment, the digital full path rendering system generates a digital image of the virtual environment. To illustrate, as described above, the digital full path rendering system determines an estimation of light transfer corresponding to full light paths between a light source and the camera perspective. The digital full path rendering system utilizes the estimated light transfer to determine a pixel (e.g., color, brightness, or luminosity) in a digital image representing the virtual environment from the camera perspective.).

Since Kurz in view of Karyodisa teach the initial method steps for generating light probes that collect light information within a virtual environment in accordance with different pixel characteristics of the light probe, and Sun teaches collecting light information within a virtual environment, including specifically determining a particular pixel's color, brightness, or luminosity, it would have been obvious to a person having ordinary skill in the art to combine these features so that the light information gathered would include a pixel's color, brightness, or luminosity. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kurz in view of Karyodisa to incorporate the teachings of Sun, so that the combined features would provide the capability of storing light information related to the luminosity and colour of a particular region and/or pixel, and thus provide better overall lighting quality for the user to experience.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lui (U.S. Patent No. 12,026,822 B2) teaches computing geometry in a virtual environment by using at least part of a projection of a light source. Samec et al. (U.S. Patent No. 11,734,896 B2) teaches a display system comprising a head-mountable, augmented reality display that is configured to provide a perception aid that may include displaying virtual content by outputting light information.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to George Renze, whose telephone number is (703) 756-5811. The examiner can normally be reached Monday-Friday, 9:00am-6:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/G.R./
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613

Prosecution Timeline

Jul 22, 2024: Application Filed
Feb 17, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602407: SYSTEMS AND METHODS FOR GENERATING A UNIQUE IDENTITY FOR A GEOSPATIAL OBJECT CODE BY PROCESSING GEOSPATIAL DATA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12573147: LANDMARK DATA COLLECTION METHOD AND LANDMARK BUILDING MODELING METHOD (granted Mar 10, 2026; 2y 5m to grant)
Patent 12555315: HEURISTIC-BASED VARIABLE RATE SHADING FOR MOBILE GAMES (granted Feb 17, 2026; 2y 5m to grant)
Patent 12530759: System and Method for Point Cloud Generation (granted Jan 20, 2026; 2y 5m to grant)
Patent 12505508: DIGITAL IMAGE RADIAL PATTERN DECODING SYSTEM (granted Dec 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 99% (+33.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
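The dashboard does not disclose its projection model. One plausible reading of the numbers shown (a 67% baseline, a +33.3% interview lift, capped at 99%) is a simple additive model, sketched here purely as an assumption:

```python
def project_grant_probability(allow_rate, interview_lift=0.0, cap=0.99):
    """One plausible projection: start from the career allow rate, add the
    observed interview lift, and cap below 1.0. The actual model behind these
    numbers is not disclosed; this is an illustrative assumption only."""
    return min(allow_rate + interview_lift, cap)

print(project_grant_probability(0.67))         # 0.67 -> the 67% baseline
print(project_grant_probability(0.67, 0.333))  # capped at 0.99, matching "99% with interview"
```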
