Prosecution Insights
Last updated: April 19, 2026
Application No. 18/372,479

METHOD AND DATA PROCESSING SYSTEM FOR CREATING OR ADAPTING INDIVIDUAL IMAGES BASED ON PROPERTIES OF A LIGHT RAY WITHIN A LENS

Final Rejection — §103, §DP
Filed: Sep 25, 2023
Examiner: PEREN, VINCENT ROBERT
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Carl Zeiss AG
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 70% (above average); 266 granted / 382 resolved; +7.6% vs TC avg
Interview Lift: +20.2% (strong), for resolved cases with interview vs. without
Typical Timeline: 2y 11m avg prosecution; 15 currently pending
Career History: 397 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 26.0% (-14.0% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 382 resolved cases

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Obligation Under 37 CFR 1.56 – Joint Inventors

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Response to Amendment

Applicant’s amendment filed on August 22, 2025 has been entered. Claims 1, 7-10, 13-14 have been amended, and claims 11-12 have been canceled. Thus, claims 1-10 and 13-14 are still under consideration in this application, with claim 1 being independent. Claims 15-16 are withdrawn. Applicant’s amendment of August 22, 2025 overcomes the following objections/rejections: Double patenting rejection (terminal disclaimer has been filed). Objections to the title of the specification. Objections to claims 7, 8 and 13.

Claim Interpretation

Applicant has amended claim 14 to include sufficient structure to perform the recited acts. Thus, claim 14 is no longer being interpreted under 35 U.S.C. 112(f).

Claim Objections

Claim 1 is objected to because of the following informalities:

Lines 1-2 of claim 1 recite: “creating a second series of individual images with a first series of individual images”. However, the meaning of this phrase is unclear and/or indefinite since it can be interpreted to have multiple different meanings. For instance, “creating a second series of individual images with a first series of individual images” can reasonably be interpreted to mean: (1) creating a second series of individual images and (i.e., “together with”) a first series of individual images; and/or (2) creating a second series of individual images using (i.e., “with”) a first series of individual images; and/or (3) creating a second series of images including (i.e., “with”) a first series of individual images. The examiner recommends amending to replace “with” using a less vague word or phrase, i.e., terminology having only one clear meaning. Clarification and appropriate correction is required.

Lines 4-6 of claim 1 recite: “determining properties of a light ray within the lens for the individual images of the first series of individual images, wherein determining the properties of the light ray comprises recording a temporal series of imaging parameters of a virtual camera”. However, it is unclear whether the recited “lens” is a real lens of a real camera or a virtual lens of the virtual camera. Clarification and/or appropriate correction is required.
Likewise, lines 8-9 recite: “properties of the light ray within the lens of a respective individual image of the first series of individual images”. As already noted above, it is unclear whether the recited “lens” is a real lens of a real camera or a virtual lens of the virtual camera. Clarification and appropriate correction is required. Furthermore, such clarification and/or appropriate correction is required for every recitation of “lens” or “camera” throughout the claims.

Lines 12-13 recite: “capturing the first series of individual images with the recording of the temporal series of imaging parameters.” However, the meaning of this phrase is unclear and/or indefinite since it can be interpreted to have multiple different meanings. For instance, “capturing the first series of individual images with the recording of the temporal series of imaging parameters” can reasonably be interpreted to mean: (1) capturing the first series of individual images at the same time as (i.e., “together with”) the recording of the temporal series of imaging parameters; and/or (2) capturing the first series of individual images using (i.e., “with”) the recording of the temporal series of imaging parameters; etc. The examiner recommends amending to replace “with” using a less vague word or phrase, i.e., terminology having only one clear meaning. Clarification and appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art;
2. Ascertaining the differences between the prior art and the claims at issue;
3. Resolving the level of ordinary skill in the pertinent art; and
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9-10, 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over LIU et al. (English machine translation of CN105488457A, hereinafter “LIU”) in view of NATTRESS (US 2005/0168485), further in view of CONNELL (US 2019/0392628).
Regarding claim 1, LIU discloses a method for creating (¶ [0004]: “planning and creating films.” ¶ [0015]: “actual shooting”), wherein individual images of the first series (e.g., Abstract: “utilizes three-dimensional virtual simulation to design a satisfactory lens effect for the creator,” Abstract: “the camera lens” ¶ [0008]: “the camera lens.” ¶ [0032]: “shooting lenses”), the method comprising: determining properties of (e.g., ¶ [0008]: “precisely controlling the spatial coordinates of the camera lens.”) for the individual images of the first series of individual images (¶ [0012]: “adjust the virtual camera's pose relative to the 3D model of the predetermined shooting scene” ¶ [0012]: “In the 3D model of the predetermined shooting scene, based on the changed position and posture of the virtual camera, perform a real-time preview of the simulated shooting effect.”), wherein determining properties of the light ray comprises recording a temporal series of imaging parameters of a virtual camera (¶ [0012]: “Record the movement trajectory of the virtual camera in the 3D model during each virtual shooting process.”) (¶ [0002]: “virtual simulation of camera motion control.” ¶ [0002]: “a virtual simulation method and system designed for shooting footage using a camera motion control system during the pre-production phase of filmmaking,” ¶ [0004]: “motion capture systems are used to bind the tracked camera data to virtual cameras in the virtual scenes.” ¶ [0005]: “the positioning of the camera in space.” ¶ [0005]: “trajectory data” ¶ [0008]: “virtual simulation of a camera motion control system,” ¶ [0008]: “utilizes 3D virtual simulation to design shot effects that satisfy the creators, then automatically calculates the parameters of the proposed camera motion control system, precisely controlling the spatial coordinates of the camera lens” ¶ [0010]: “Set up the physical camera model and track and record its position and orientation;” ¶ [0011]: “Set the initial position and posture of the virtual camera in the 3D model.” ¶ [0012]: “Step 3: Perform the virtual shooting process: Move the physical camera model, and adjust the virtual camera's pose relative to the 3D model of the predetermined shooting scene according to the changes in the physical camera model's position and posture. That is, the movement trajectory of the physical camera model and the virtual camera's movement trajectory in the 3D model of the scene are consistent. In the 3D model of the predetermined shooting scene, based on the changed position and posture of the virtual camera, perform a real-time preview of the simulated shooting effect. Record the movement trajectory of the virtual camera in the 3D model during each virtual shooting process.” ¶ [0013]: “Adjust the pose of the physical camera model and the initial position and posture of the virtual camera in the 3D model based on the simulated shooting effect; until the simulated shooting effect of the virtual camera in the 3D model meets the director's requirements; save the motion trajectory of the virtual camera in the 3D model during the virtual shooting process;” ¶ [0043]: “Step 3: Perform the virtual shooting process: Move the physical camera model, and adjust the virtual camera's pose relative to the 3D model of the predetermined shooting scene based on the changes in the physical camera model's position and posture. 
That is, the movement trajectory of the physical camera model is consistent with the movement trajectory of the virtual camera in the 3D model of the scene. In implementation, the data of the physical camera model is directly assigned to the virtual camera in the 3D animation software, ensuring that the director can operate the virtual camera while operating the physical camera model.” ¶ [0044]: “Based on the position and posture of the virtual camera after the pose change, a real-time preview of the virtual camera's simulated shooting effect is performed in the 3D model of the predetermined shooting scene; the motion trajectory of the virtual camera in the 3D model during each virtual shooting process is recorded (i.e., the motion data of the physical camera model during the virtual shooting process, which are consistent).”); planning a capturing of the first series of individual images (e.g., Abstract: “picture planning and the like in the early stage of film shooting,” ¶ [0004]: “for planning and creating films.” ¶ [0004]: “during the film shooting planning stage, animation modeling software is used to build virtual scenes, and motion capture systems are used to bind the tracked camera data to virtual cameras in the virtual scenes.” ¶ [0008]: enable filmmakers to more intuitively adjust camera movement trajectories and shot order in the pre-production stage of filmmaking, based on their operating habits, perspectives, and shot planning.“ ¶ [0011]: “Step 2: Create a 3D model of the planned shooting scene. The spatial data of the 3D model is consistent with the real scene. Set the initial position and posture of the virtual camera in the 3D model. The initial position is set according to the director's predetermined shooting rules.” ¶ [0012]: “Step 3: Perform the virtual shooting process: Move the physical camera model, and adjust the virtual camera's pose relative to the 3D model of the predetermined shooting scene according to the changes in the physical camera model's position and posture. That is, the movement trajectory of the physical camera model and the virtual camera's movement trajectory in the 3D model of the scene are consistent. In the 3D model of the predetermined shooting scene, based on the changed position and posture of the virtual camera, perform a real-time preview of the simulated shooting effect. 
Record the movement trajectory of the virtual camera in the 3D model during each virtual shooting process.” ¶ [0058]: “Based on the simulated real shooting scene in the 3D animation software, and in accordance with the director's requirements, the camera's movement trajectory and posture in space during actual shooting are designed”) based on the recording of the temporal series of imaging parameters (¶ [0008]: “utilizes 3D virtual simulation to design shot effects that satisfy the creators, then automatically calculates the parameters of the proposed camera motion control system, precisely controlling the spatial coordinates of the camera lens” ¶ [0013]: “Step 4: Adjust the pose of the physical camera model and the initial position and posture of the virtual camera in the 3D model based on the simulated shooting effect; until the simulated shooting effect of the virtual camera in the 3D model meets the director's requirements; save the motion trajectory of the virtual camera in the 3D model during the virtual shooting process;” ¶ [0023]: “The display module includes: a virtual scene display module and a shooting effect preview module:” ¶ [0030]: “(1) This invention helps filmmakers to design and adjust shots using the camera motion control system more conveniently, increases the flexibility of shot design in the early stages of filming, and makes the camera control system more interactive.” ¶ [0045]: “Step 4: Adjust the pose of the physical camera model and the initial position and posture of the virtual camera in the 3D model based on the simulated shooting effect; until the simulated shooting effect of the virtual camera in the 3D model meets the director's requirements; save the motion trajectory of the virtual camera in the 3D model during the virtual shooting process;” ¶ [0046]: “Step 5: Based on the camera motion trajectory data obtained in Step 4 and the parameters and model of the camera motion control system to be used, calculate the joint pose data at each moment required to achieve the simulated shooting effect using the camera motion control system.” ¶ [0048]: “To calculate the pose data of each joint of the robot at each moment required for actual filming, only the pose data of the virtual camera at that moment is needed.”); and capturing the first series of individual images (¶ [0004]: “planning and creating films.” ¶ [0015]: “actual shooting”) with the recording of the temporal series of imaging parameters (¶ [0014]: “Step 5: Based on the camera motion trajectory data obtained in Step 4 and the parameters and model of the camera motion control system to be used, calculate the joint pose data at each moment required to achieve the simulated shooting effect using the camera motion control system.” ¶ [0015]: “Step 6: The obtained posture data of each joint of the camera motion control system at each moment is used to operate the camera motion control system during actual shooting.” ¶ [0049]: “Step 6: The obtained posture data of each joint of the camera motion control system at each moment is used to operate the camera motion control system during actual shooting.” ¶ [0059]: “After obtaining the virtual camera motion pose data in the virtual scene of the 3D animation software, the inverse kinematics method of redundant robots is used to calculate the posture of the camera motion control system model (referring to the multi-rigid-body coupling system of the real camera-controlled robot), that is, the motion pose data of each joint of the robot, which is also the posture data of 
the camera motion control system, and is used to operate the camera motion control system for real shooting.”). LIU fails to explicitly disclose: creating a second series of individual images with a first series of individual images; determining properties of a light ray within the lens; and creating or adapting the individual images of the second series of individual images taking account of properties of the light ray within the lens of a respective individual image of the first series of individual images. Nevertheless, whereas LIU may not be entirely explicit as to, NATTRESS clearly teaches: a method for creating a second series of individual images (e.g., “computer-generated 3D images”) with a first series of individual images (e.g., “real images”) (ABSTRACT: “A method for producing composite images of real images and computer-generated 3D images uses camera-and-lens sensor data. The real images can be live, or pre-recorded, and may originate on film or video. The computer-generated 3D images are generated live, simultaneously with the film or video and can be animated or still based upon pre-prepared 3D data.”), the method comprising: determining properties (¶ [0017]: “to accurately simulate the real camera in terms of optical qualities such as position, orientation and focus, aperture and depth of field.” ¶ [0026]: “If the computer simulation of a virtual camera is capable of simulating lens distortions then the lens information from the camera data can be used as parameters in the simulation of the virtual camera, otherwise the image processing techniques can be used.”) for the individual images (¶ [0016]: “each frame of video”) of the first series of individual images (¶ [0016]: “each frame of video.”) (¶ [0005]: “the use of lens sensor information”; ¶ [0006]: “computer simulation of the lens,” ¶ [0007]: “accurate computer graphic representations of depth of field and focus,” ¶ [0008]: “and accurate geometrical correspondence by taking into account the movements of the individual lens elements inside the camera.” ¶ [0016]: “A camera 1 such as a film, video, or high-definition video camera can be fitted with sensors 2 as part of the lens 3. The lens sensors 2 can produce a digital signal 4 that represents the positions of the lens elements they are sensing. Additional position and orientation sensors 5 on the camera itself can reference their positions to a fixed reference point 6 (shown in FIG. 4) not attached to the camera. The camera sensors also produce a digital signal 7, which is later combined at a combination module 8 with the lens sensor signal to be transmitted from a transmission unit 9 to a computer system 10 as shown in FIG. 2. The camera itself records the image presented to it, for example, via videotape 11, and can also transmit from an output 12 (via cable or other means) the video image to a compositing 13 or monitoring 14 apparatus. The camera also generates a time code 15 which it uniquely assigns to each frame of video using an assignment module 16. Assigning the same timecode to the set of collected sensor data recorded at the same time produces meta-data 17 of the camera image.”); and creating (¶ [0023]: “produce graphics 40,”) or adapting (NOTE: Since the alternative limitation (i.e., “creating”) has been met, this limitation (i.e., “adapting”) is not required to be given any patentable weight.) 
the individual images of the second series of individual images taking account of properties (¶ [0017]: “This meta-data can then be transmitted from an output 18 to a computer system (by cable, wireless or other means) where processing can take place that will convert the meta-data into camera data 19. The camera data is used by 3D computer graphics software 20 or compositing application 21 (as shown in FIG. 2) to allow the systems to accurately simulate the real camera in terms of optical qualities such as position, orientation and focus, aperture and depth of field.” ¶ [0023]: “In real time, 3D computer graphics techniques can display a pre-prepared or generated animation or scene 37. The virtual camera 38 used in the 3D techniques uses the accurate information from the camera data to allow it to produce graphics 40, as shown in block 39, which correspond to the video images in terms of position, orientation and perspective, field of view, focus, and depth of field--the optical qualities.” ¶ [0003]: “It is desirable for a good-looking virtual set that there is an accurate dynamic link between the camera recording the actors and the computer generating the 3D graphics. It is preferred that the computer receives data indicating precisely where the camera is, which direction it is pointing, and what the status of the lens focus, zoom and aperture is for every frame of video recorded. This ensures that the perspective and view of the virtual set is substantially the same as that of the video of the actor that is being placed into the virtual set, and that when the camera moves, there is synchronization between the real camera move and the view of the virtual set.” ¶ [0026]: “Lens distortion, where the video image recorded by the camera appears distorted due to the particular lenses being used by the camera, can also be applied to the computer graphics using image-based processing techniques. Computer graphics generally do not exhibit any lens distortion because a lens is not used in their production. The computer simulation of a virtual camera will generally not produce lens distortions. If the computer simulation of a virtual camera is capable of simulating lens distortions then the lens information from the camera data can be used as parameters in the simulation of the virtual camera, otherwise the image processing techniques can be used.” ¶ [0028]: In the first case, the video images have lens distortion caused by the lenses used in the camera, and an equivalent distortion in terms of nature and amount are calculated from the camera data and applied to the computer graphics via the image-based processing.” ¶ [0001]: “The invention relates to producing a series of generated images in response to data from a camera/lens system in such a way that the generated images match the visual representation resulting from the data parameters. The optical qualities of the generated images are similar to the optical qualities of the images resulting from the camera/lens system. Optical qualities that may be modified according to the present invention include qualities such as depth of field, focus, t-stop (exposure), field of view and perspective.”). 
Thus, in order to obtain a more versatile imaging system having the cumulative features and/or functionalities taught by LIU and NATTRESS, it would have been obvious to one of ordinary skill in the art to have modified the method of creating a first series of images taught by LIU so as to also incorporate creating or adapting the individual images of a second series of individual images taking account of properties of the lens of a respective individual image of the first series of individual images, as taught by NATTRESS. NATTRESS discloses simulating the camera lens. However, although arguably inherent in simulating the camera lens, NATTRESS does not explicitly disclose: “determining properties of a light ray within the lens” and “taking account of properties of the light ray within the lens.” However, whereas LIU and NATTRESS may not be explicit as to, CONNELL plainly teaches that simulating a lens/lenses of a camera includes: determining properties of a light ray within the lens (e.g., ¶ [0065]: “obtaining a point spread function of the one or more lenses”; ¶ [0066]: “generating a look up table”; ¶ [0031]: “point spread function 152 of one or more lenses,” ¶ [0032]: “point spread function 152”; ¶ [0032]: “look up table 172.”) (¶ [0065]: “With reference to FIG. 6A, at 604, the method 600 may include, at a precomputing stage, obtaining a point spread function of the one or more lenses.” ¶ [0066]: “At 616, the method 600 may include, based on ray tracing the first input raster image, generating a look up table by computing a contribution to a pixel in the first output image, wherein the contribution is from a pixel at each location of a subset of locations in the first input raster image.” ¶ [0067]: “At 624, the method 600 may include, at a runtime stage, obtaining a second input raster image comprising a plurality of pixels. At 628, the method 600 may include using the look up table to generate a second output image from the second input raster image.” ¶ [0069]: “With reference to FIG. 6B, at 652, the method 600 may include obtaining a third input raster image. At 656, the method 600 may include, wherein obtaining the second input raster image and the third input raster image comprises obtaining the second input raster image and the third input raster image at a frame rate. At 660, the method 600 may include using the look up table to generate a third output image from the third input raster image. At 664, the method 600 may include, wherein generating the second output image and the third output image comprises generating the second output image and the third output image at approximately the frame rate.” Abstract: “methods for simulating light passing through one or more lenses.” Abstract: “method comprises obtaining a point spread function of the one or more lenses, obtaining a first input raster image comprising a plurality of pixels, and ray tracing the first input raster image using the point spread function to generate a first output image. Based on ray tracing the first input raster image, a look up table is generated by computing a contribution to a pixel in the first output image, wherein the contribution is from a pixel at each location of a subset of locations in the first input raster image. 
A second input raster image is obtained, and the look up table is used to generate a second output image from the second input raster image.” ¶ [0001]: “Ray tracing can generate accurate images by calculating how light travels through optical systems, such as lenses” ¶ [0004]: “The method generates a look up table based on ray tracing the first input raster image by computing a contribution to a pixel in the first output image, wherein the contribution is from a pixel at each location of a subset of locations in the first input raster image. At a runtime stage, a second input raster image comprising a plurality of pixels is obtained, and using the look up table the method generates a second output image from the second input raster image.” ¶ [0015]: “Generating images of real or simulated scenes also may be useful in a variety of other examples, including cinematography and gaming.” ¶ [0016]: “In some examples, effects of viewing an image via a lens, prism, mirror or other optical hardware may be simulated using reference physical parameters of the optical hardware. For example, a simulated view through a lens may be generated by ray tracing how light will travel inside the lens based on given parameters. For example, the parameters may include a point spread function describing how light spreads when it passes through the lens.” ¶ [0017]: “Ray tracing using parameters such as a point spread function may generate accurate simulations of optical systems.” ¶ [0018]: “Accordingly, examples are disclosed that relate to computing devices and methods for simulating light passing through one or more lenses.” ¶ [0031]: “In some examples involving an image capture device, the developer also may describe one or more optical systems of the image capture device so that images of the room 200 may be generated with proper distortions to accurately simulate images captured via the image capture device. For example, the developer may provide a point spread function 152 of one or more lenses, as described in more detail below regarding FIG. 4.” ¶ [0039]: “With reference now to FIG. 4, and as mentioned above, it may be desirable to simulate distortions introduced into one or more images by an optical system, such as lenses of the image sensor(s) of HMD devices. As described above, distortions and aberrations introduced by such optical systems may be modeled using a point spread function 152 that describes how light spreads when it passes through a lens.” ¶ [0040]: “FIG. 4 illustrates one example of a point spread function 400 of a lens. In the example of FIG. 4, the point spread function 400 is modeled as a Gaussian function with a standard deviation of 0.5 that illustrates a relative intensity 404 of light focused at a pixel as it spreads out, or blurs, after passing through the lens. The x-axis of the point spread function 400 is the relative position 408 to the pixel along an axis of an output image 140.”) and creating or adapting the individual images of the second series of individual images (e.g., ¶ [0032]: “generate a first output image 140.” ¶ [0034]: “a second output image, third output image and/or additional output images may be generated”) taking account of properties of the light ray within the lens (e.g., ¶ [0032]: “using the point spread function 152”; ¶ [0034]: “using the look up table 172.”) (¶ [0032]: “Using the point spread function 152, the computing device 104 may utilize a ray tracer 156 to ray trace a first input raster image 132 and generate a first output image 140. 
Next, the computing device 104 may generate a look up table 172 based on ray tracing the first input raster image 132. For example, the first input raster image 132 may comprise a plurality of pixels 160. Each pixel of the plurality of pixels 160 may have a discrete location 164 within the image and a color value 168.” ¶ [0034]: “As described in more detail below, the computing device 104 may receive additional input raster images for processing, such as a second input raster image, third input raster image, etc. Accordingly, and in one potential advantage of the present disclosure, a second output image, third output image and/or additional output images may be generated, respectively, using the look up table 172.” ¶ [0043]: “With reference again to FIG. 1 and as described above, using the point spread function 152 a first input raster image 132 may be ray traced to generate a first output image 140.” ¶ [0044]: “The ray-tracer 156 may project each pixel of the first input raster image 132 on a first side of a simulated lens onto a first output image 140 on an opposing side of the simulated lens by calling the point spread function 152 for every point.” ¶ [0065]: “At 608, the method 600 may include obtaining a first input raster image comprising a plurality of pixels. At 612, the method 600 may include ray tracing the first input raster image using the point spread function to generate a first output image.” ¶ [0066]: “At 616, the method 600 may include, based on ray tracing the first input raster image, generating a look up table by computing a contribution to a pixel in the first output image, wherein the contribution is from a pixel at each location of a subset of locations in the first input raster image. At 620, the method 600 may include, wherein the subset of locations comprises locations of at least 512 different pixels.” ¶ [0067]: “At 624, the method 600 may include, at a runtime stage, obtaining a second input raster image comprising a plurality of pixels. At 628, the method 600 may include using the look up table to generate a second output image from the second input raster image.” ¶ [0020]: “For example, in FIG. 1, the user computing device 120 or the HMD device 124 may request that computing device 104 process a plurality of images to simulate the effect of viewing the images through one or more lenses or other optical system.” ¶ [0021]: “The computing device 104 may comprise a rasterizer 128, which may convert each image of the plurality of images into an input raster image 132. In other examples, the user computing device 120 may select a plurality of input raster images 132 already stored by the computing device 104.” ¶ [0022]: “In yet another example, discussed in more detail below, the user computing device 120 may request that computing device 104 generate a plurality of input raster images 132 from a model environment 136. The model environment 136 may comprise one or more predetermined three-dimensional worlds from which environment images may be generated.” ¶ [0027]: “With reference again to FIG. 1, the rasterizer 128 may generate one or more input raster images 132 that simulate a view of the room 200 from a perspective of the HMD device 208 at given positions and orientations along the path 204. 
In different examples, the rasterizer 128 may generate the input raster images 132 for a computer-generated film, video game, virtual or mixed reality application, etc.” ¶ [0034]: “As described in more detail below, the computing device 104 may receive additional input raster images for processing, such as a second input raster image, third input raster image, etc. Accordingly, and in one potential advantage of the present disclosure, a second output image, third output image and/or additional output images may be generated, respectively, using the look up table 172.”). Thus, in order to accurately simulate a camera lens for compositing real and virtual images, it would have been obvious to one of ordinary skill in the art to have modified the system taught by the combination of LIU and NATRESS so as to simulate the camera lens by determining properties of a light ray within the camera lens recording a first image to create or adapt a second image taking account of properties of the light ray within the lens, as taught by CONNELL. Regarding claim 2 (depends on claim 1), whereas LIU may not be explicit as to, NATTRESS further teaches: wherein determining properties of the light ray within the lens comprises determining an entrance pupil (e.g., ¶ [0056]: “A Zoom meta-data value of 84245 also corresponds to a nodal point calibration of 282.87 mm. This is the distance from CCD to the nodal point. The nodal point is also called the entrance pupil. It is where all incoming rays converge in the lens and it is where the true camera position lies. The nodal point is not fixed in space relative to the rest of the camera, but changes as the zoom of the lens changes.”) and a field of view of the lens (e.g., ¶ [0001]: “field of view”; ¶ [0017]: “depth of field”; ¶ [0023]: “field of view”; ¶ [0055]: “A Zoom meta-data value of 84245 corresponds to a field of view of the lens (FOV) of 13.025 degrees.”; ¶ [0057]: “field of view”) for the individual images of the first series of individual images (e.g., ¶ [0003]: “what the status of the lens focus, zoom and aperture is for every frame of video recorded”) (¶ [0016]: “The camera itself records the image presented to it, for example, via videotape 11, and can also transmit from an output 12 (via cable or other means) the video image to a compositing 13 or monitoring 14 apparatus. The camera also generates a time code 15 which it uniquely assigns to each frame of video using an assignment module 16. Assigning the same timecode to the set of collected sensor data recorded at the same time produces meta-data 17 of the camera image.” ¶ [0017]: “This meta-data can then be transmitted from an output 18 to a computer system (by cable, wireless or other means) where processing can take place that will convert the meta-data into camera data 19. The camera data is used by 3D computer graphics software 20 or compositing application 21 (as shown in FIG. 2) to allow the systems to accurately simulate the real camera in terms of optical qualities such as position, orientation and focus, aperture and depth of field.” ¶ [0023]: “In real time, 3D computer graphics techniques can display a pre-prepared or generated animation or scene 37. 
The virtual camera 38 used in the 3D techniques uses the accurate information from the camera data to allow it to produce graphics 40, as shown in block 39, which correspond to the video images in terms of position, orientation and perspective, field of view, focus, and depth of field--the optical qualities.” ¶ [0057]: “An advantage of generating the 3D computer graphics in real time is that animations can be stored in the system as well as a virtual set. By triggering the playback of an animation manually or at a specific time-code the animation can be generated so that it is produced in synchronization with the camera video, thus allowing complex special effects shots to be previewed during production. Later, in the post production phase, the animations will be rendered at a high quality, using the camera data recorded during production to ensure an accurate visual match between the recorded video and the rendered animation in terms of position, orientation, perspective, field of view, focus, and depth of field.” ¶ [0003]: “It is desirable for a good-looking virtual set that there is an accurate dynamic link between the camera recording the actors and the computer generating the 3D graphics. It is preferred that the computer receives data indicating precisely where the camera is, which direction it is pointing, and what the status of the lens focus, zoom and aperture is for every frame of video recorded. This ensures that the perspective and view of the virtual set is substantially the same as that of the video of the actor that is being placed into the virtual set, and that when the camera moves, there is synchronization between the real camera move and the view of the virtual set.” ¶ [0001]: “The invention relates to producing a series of generated images in response to data from a camera/lens system in such a way that the generated images match the visual representation resulting from the data parameters. The optical qualities of the generated images are similar to the optical qualities of the images resulting from the camera/lens system. Optical qualities that may be modified according to the present invention include qualities such as depth of field, focus, t-stop (exposure), field of view and perspective.”); and/or wherein creating or adapting the individual images of the second series of individual images is effected taking account of the entrance pupil (¶ [0056]: “the entrance pupil”) and the field of view of the lens (¶ [0023]: “field of view,” ¶ [0055]: “a field of view of the lens (FOV)”) of the respective individual image (¶ [0016]: “meta-data 17 of the camera image.”) of the first series of individual images (¶ [0031]: “Each line of meta-data represents what is happening to the lens and camera at an instance of time, which is specified by the timecode.” ¶ [0032]: “Timecode refers to the time a frame of video or film is recorded at. The four numbers represent hours, minutes, seconds and frames.”) (¶ [0016]: “The camera itself records the image presented to it, for example, via videotape 11, and can also transmit from an output 12 (via cable or other means) the video image to a compositing 13 or monitoring 14 apparatus. The camera also generates a time code 15 which it uniquely assigns to each frame of video using an assignment module 16. 
Assigning the same timecode to the set of collected sensor data recorded at the same time produces meta-data 17 of the camera image.” ¶ [0017]: “This meta-data can then be transmitted from an output 18 to a computer system (by cable, wireless or other means) where processing can take place that will convert the meta-data into camera data 19. The camera data is used by 3D computer graphics software 20 or compositing application 21 (as shown in FIG. 2) to allow the systems to accurately simulate the real camera in terms of optical qualities such as position, orientation and focus, aperture and depth of field.” ¶ [0023]: “In real time, 3D computer graphics techniques can display a pre-prepared or generated animation or scene 37. The virtual camera 38 used in the 3D techniques uses the accurate information from the camera data to allow it to produce graphics 40, as shown in block 39, which correspond to the video images in terms of position, orientation and perspective, field of view, focus, and depth of field--the optical qualities.” ¶ [0055]: “A Zoom meta-data value of 84245 corresponds to a field of view of the lens (FOV) of 13.025 degrees.” ¶ [0056]: “A Zoom meta-data value of 84245 also corresponds to a nodal point calibration of 282.87 mm. This is the distance from CCD to the nodal point. The nodal point is also called the entrance pupil. It is where all incoming rays converge in the lens and it is where the true camera position lies. The nodal point is not fixed in space relative to the rest of the camera, but changes as the zoom of the lens changes. Again, the focus distance is from the CCD to the object in the focal plane, whereas in this particular computer simulation of the lens, the focus distance is from the point in space that represents the camera.”¶ [0003]: “It is desirable for a good-looking virtual set that there is an accurate dynamic link between the camera recording the actors and the computer generating the 3D graphics. It is preferred that the computer receives data indicating precisely where the camera is, which direction it is pointing, and what the status of the lens focus, zoom and aperture is for every frame of video recorded. This ensures that the perspective and view of the virtual set is substantially the same as that of the video of the actor that is being placed into the virtual set, and that when the camera moves, there is synchronization between the real camera move and the view of the virtual set.”). Thus, in order to obtain a more versatile imaging system having the cumulative features and/or functionalities taught by LIU, NATTRESS and CONNELL, it would have been obvious to one of ordinary skill in the art to have further modified the method of creating a first series of images taught by LIU so as to also incorporate creating or adapting the individual images of a second series of individual images taking account of properties of the lens of a respective individual image of the first series of individual images, as taught by NATTRESS. 
Regarding claim 3 (depends on claim 2), whereas LIU and NATTRESS may not be explicit as to, CONNELL further teaches: wherein creating or adapting (e.g., ¶ [0032]: “generate a first output image 140.” ¶ [0034]: “a second output image, third output image and/or additional output images may be generated, respectively,”) comprises performing a point spread function (¶ [0032]: “Using the point spread function 152,”; ¶ [0034]: “using the look up table 172.”) and/or an optical transfer function and/or a ray function (¶ [0032]: “Using the point spread function 152, the computing device 104 may utilize a ray tracer 156 to ray trace a first input raster image 132 and generate a first output image 140. Next, the computing device 104 may generate a look up table 172 based on ray tracing the first input raster image 132. For example, the first input raster image 132 may comprise a plurality of pixels 160. Each pixel of the plurality of pixels 160 may have a discrete location 164 within the image and a color value 168.” ¶ [0034]: “As described in more detail below, the computing device 104 may receive additional input raster images for processing, such as a second input raster image, third input raster image, etc. Accordingly, and in one potential advantage of the present disclosure, a second output image, third output image and/or additional output images may be generated, respectively, using the look up table 172.” ¶ [0043]: “With reference again to FIG. 1 and as described above, using the point spread function 152 a first input raster image 132 may be ray traced to generate a first output image 140.” ¶ [0044]: “The ray-tracer 156 may project each pixel of the first input raster image 132 on a first side of a simulated lens onto a first output image 140 on an opposing side of the simulated lens by calling the point spread function 152 for every point.” ¶ [0065]: “At 608, the method 600 may include obtaining a first input raster image comprising a plurality of pixels. At 612, the method 600 may include ray tracing the first input raster image using the point spread function to generate a first output image.” ¶ [0066]: “At 616, the method 600 may include, based on ray tracing the first input raster image, generating a look up table by computing a contribution to a pixel in the first output image, wherein the contribution is from a pixel at each location of a subset of locations in the first input raster image. At 620, the method 600 may include, wherein the subset of locations comprises locations of at least 512 different pixels.” ¶ [0067]: “At 624, the method 600 may include, at a runtime stage, obtaining a second input raster image comprising a plurality of pixels. At 628, the method 600 may include using the look up table to generate a second output image from the second input raster image.” ¶ [0020]: “For example, in FIG. 1, the user computing device 120 or the HMD device 124 may request that computing device 104 process a plurality of images to simulate the effect of viewing the images through one or more lenses or other optical system.” ¶ [0021]: “The computing device 104 may comprise a rasterizer 128, which may convert each image of the plurality of images into an input raster image 132. 
In other examples, the user computing device 120 may select a plurality of input raster images 132 already stored by the computing device 104.” ¶ [0022]: “In yet another example, discussed in more detail below, the user computing device 120 may request that computing device 104 generate a plurality of input raster images 132 from a model environment 136. The model environment 136 may comprise one or more predetermined three-dimensional worlds from which environment images may be generated.” ¶ [0027]: “With reference again to FIG. 1, the rasterizer 128 may generate one or more input raster images 132 that simulate a view of the room 200 from a perspective of the HMD device 208 at given positions and orientations along the path 204. In different examples, the rasterizer 128 may generate the input raster images 132 for a computer-generated film, video game, virtual or mixed reality application, etc.” ¶ [0034]: “As described in more detail below, the computing device 104 may receive additional input raster images for processing, such as a second input raster image, third input raster image, etc. Accordingly, and in one potential advantage of the present disclosure, a second output image, third output image and/or additional output images may be generated, respectively, using the look up table 172.”). Thus, in order to simulate a camera lens for compositing real and virtual images, it would have been obvious to one of ordinary skill in the art to have modified the system taught by the combination of LIU, NATTRESS and CONNELL to as to simulate the camera lens by determining properties of a light ray within the camera lens recording a first image to create or adapt a second image taking account of properties of the light ray within the lens, as further taught by CONNELL. Regarding claim 4 (depends on claim 1), whereas LIU may not be explicit as to, NATTRESS further teaches: combining an image content of the first series of individual images and the image content of the second series of individual images (¶ [0024]: “The computer graphic images are displayed on a monitor 41, as shown in FIG. 2, and also transmitted 42 to a video monitor or compositing apparatus. The compositing apparatus can display a composite image of the video from the camera and the corresponding computer graphics generated by the 3D computer graphics techniques using the information from the camera data.” ), wherein the combining comprises adapting the image content of the individual images of the second series of individual images to the image content of the individual images of the first series of individual images (¶ [0023]: “In real time, 3D computer graphics techniques can display a pre-prepared or generated animation or scene 37. The virtual camera 38 used in the 3D techniques uses the accurate information from the camera data to allow it to produce graphics 40, as shown in block 39, which correspond to the video images in terms of position, orientation and perspective, field of view, focus, and depth of field--the optical qualities.” ¶ [0025]: “Image-based processing 43 of the computer graphics can be used to enhance the alignment between the computer graphics and the recorded video.” ¶ [0026]: “Lens distortion, where the video image recorded by the camera appears distorted due to the particular lenses being used by the camera, can also be applied to the computer graphics using image-based processing techniques. 
Computer graphics generally do not exhibit any lens distortion because a lens is not used in their production. The computer simulation of a virtual camera will generally not produce lens distortions. If the computer simulation of a virtual camera is capable of simulating lens distortions then the lens information from the camera data can be used as parameters in the simulation of the virtual camera, otherwise the image processing techniques can be used.” ¶ [0027]: “Lens distortion varies as the lens elements move inside the camera. By using the lens information from the camera data, the correct nature and amount of lens distortion can be calculated and made to vary with any adjustments to the lens elements in the camera. Similarly, an inverse lens distortion can also be calculated. An inverse distortion is an image based process such that applying it will remove the lens distortion present in the image. To ensure an accurate visual match between the video images and the computer graphics, either the lens distortion from the video images can be applied to the computer graphics, or the lens distortion can be removed from the video images.” ¶ [0028]: “In the first case, the video images have lens distortion caused by the lenses u
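For orientation only: the §103 rejection above leans on CONNELL's cited approach of simulating light passing through a lens with a point spread function (PSF), ray tracing a first frame, precomputing a look-up table (LUT) of per-pixel contributions, and then reusing that LUT for later frames at runtime. The sketch below is a minimal, hypothetical illustration of that general precompute-then-reuse pattern (a simple Gaussian PSF, grayscale frames as flat lists); the function names are invented for this sketch and it is not CONNELL's actual implementation or the claimed method.

    # Hypothetical sketch: Gaussian PSF blur with a precomputed look-up table (LUT),
    # precomputed once and reused for subsequent frames at runtime.
    import math

    def gaussian_psf(offset, sigma=0.5):
        """Relative intensity contributed by a pixel 'offset' positions away."""
        return math.exp(-(offset * offset) / (2.0 * sigma * sigma))

    def build_lut(width, height, radius=2, sigma=0.5):
        """Precomputing stage: for every output pixel, record (input pixel, weight) pairs."""
        lut = []
        for y in range(height):
            for x in range(width):
                entries, total = [], 0.0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        sx, sy = x + dx, y + dy
                        if 0 <= sx < width and 0 <= sy < height:
                            w = gaussian_psf(math.hypot(dx, dy), sigma)
                            entries.append((sy * width + sx, w))
                            total += w
                # Normalize so the simulated blur preserves overall brightness.
                lut.append([(idx, w / total) for idx, w in entries])
        return lut

    def apply_lut(frame, lut):
        """Runtime stage: generate an output frame from any input frame via the LUT."""
        return [sum(frame[idx] * w for idx, w in entries) for entries in lut]

    # Usage: frames are flat, row-major lists of grayscale values.
    w, h = 4, 3
    lut = build_lut(w, h)              # done once (expensive)
    frame1 = [0.0] * (w * h)
    frame1[1 * w + 2] = 1.0            # a single bright pixel
    blurred1 = apply_lut(frame1, lut)  # per-frame (cheap)
    frame2 = [0.5] * (w * h)
    blurred2 = apply_lut(frame2, lut)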

Prosecution Timeline

Sep 25, 2023
Application Filed
Feb 27, 2025
Non-Final Rejection — §103, §DP
Aug 22, 2025
Response Filed
Dec 10, 2025
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592017
Rendering XR Avatars Based on Acoustical Features
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586282
AVATAR COMMUNICATION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12555314
THREE-DIMENSIONAL SHADING METHOD, APPARATUS, AND COMPUTING DEVICE, AND STORAGE MEDIUM
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12555296
ADAPTING SIMULATED CHARACTER INTERACTIONS TO DIFFERENT MORPHOLOGIES AND INTERACTION SCENARIOS
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12541913
METHOD AND APPARATUS FOR REBUILDING RELIGHTABLE IMPLICIT HUMAN BODY MODEL
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 90% (+20.2%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 382 resolved cases by this examiner. Grant probability derived from career allow rate.
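As an inference from the numbers shown above (not a documented formula): the 70% grant probability matches the examiner's career allow rate of 266 granted out of 382 resolved cases (266 / 382 ≈ 69.6%, rounded to 70%), and the 90% with-interview figure appears to add the +20.2 point interview lift (69.6% + 20.2% ≈ 89.8%, rounded to 90%).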
