Prosecution Insights
Last updated: April 19, 2026
Application No. 18/666,433

RENDERING METHOD AND DEVICE

Status: Non-Final OA (§103)
Filed: May 16, 2024
Examiner: LE, MICHAEL
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 66% (Favorable)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 66% (above average); 568 granted of 864 resolved, +3.7% vs TC avg
Interview Lift: +22.1% (strong) for resolved cases with interview
Typical Timeline: 3y 3m average prosecution; 61 applications currently pending
Career History: 925 total applications across all art units
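
The headline figures above follow directly from the examiner's career counts. Below is a minimal sketch of the arithmetic (Python; variable names are illustrative, and it assumes the interview lift is additive in percentage points, which the dashboard does not state explicitly):

```python
# Minimal sketch checking the dashboard's headline figures.
# Assumption: the +22.1% interview lift is additive in percentage points.
granted, resolved = 568, 864
career_allow_rate = granted / resolved                         # ~0.657, displayed as 66%
interview_lift_pts = 22.1
with_interview = career_allow_rate * 100 + interview_lift_pts  # ~87.8, displayed as 88%

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Estimated grant probability with interview: {with_interview:.0f}%")
```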

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 52.7% (+12.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Comparisons are against the estimated Tech Center average • Based on career data from 864 resolved cases

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status 1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Information Disclosure Statement 2. The information disclosure statements (IDS) submitted on the following dates are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner: 05/16/2024. Claim Objections 3. Claim 1, Line 2 objected to because of the following informalities: The minor typographical error "... a target view Appropriate correction is required. Claim Rejections - 35 USC § 103 4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. 5. Claims 1-2 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al., (“Chen”) [US-2009/0110239-A1] in view of Kosiorek et al. (“Kosiorek”) [US-2024/0070972-A1] Regarding claim 1, Chen discloses a rendering method (Chen- Abstract, at least discloses system and method for identifying objects in an image dataset that occlude other objects and for transforming the image dataset to reveal the occluded objects) comprising: obtaining a target image corresponding to a target view based on by inputting first parameter information corresponding to the target view (Chen- Fig. 1A shows a photograph of a street locality showing a shop sign partially occluded by a street sign [a target image] with the first viewpoint; ¶0020, at least discloses a photograph of street scene with buildings adjoining the street. In the view of FIG. 
1A, a street sign 100 partially blocks the view of a sign 102 on one of the shops next to the street [obtaining a target image corresponding to a target view]; Fig 2A and ¶0014, at least disclose an overhead schematic of a vehicle collecting images in a street locality; in this view, a doorway is partially occluded by a signpost [target view]; Fig 2A and ¶0026, at least disclose 206 shows the point-of-view [first parameter information] of just one camera in the image-capture system 204. As the vehicle 200 passes the building 208, this camera (and others, not shown) image various areas of the building 208. Located in front of the building 208 are “occluding objects,” here represented by signposts 210. These are called “occluding objects” because they hide (or “occlude”) whatever is behind them. In the example of FIG. 2A, from the illustrated point-of-view 206 [first parameter information], the doorway 212 of the building 208 [target view] is partially occluded by one of the signposts 210); obtaining an adjacent view that satisfies a predetermined condition with respect to the target view (Chen- Fig. 1B shows a photograph of the same locality as shown FIG. 1A but taken from a different viewpoint [an adjacent view] where the street sign does not occlude the same portion of the shop sign as occluded in FIG. 1A [a target image]; Fig 1B and ¶0021, at least disclose FIG. 1B is another photograph [an adjacent view], taken from a slightly different point of view from that of FIG. 1A [a predetermined condition with respect to the target view]. Comparing FIGS. 1A and 1B, the foreground street sign 100 has “moved” relative to the background shop sign 102. Because of this parallax “movement,” the portion of the shop sign 102 that is blocked in FIG. 1A is now clearly visible in FIG. 1B. (Also, a portion of the shop sign 102 that is visible in FIG. 1A is blocked in FIG. 1B.)); obtaining an adjacent image corresponding to the adjacent view by inputting second parameter information (Chen- Fig. 1B shows a photograph of the same locality as shown FIG. 1A but taken from a different viewpoint [an adjacent view] where the street sign does not occlude the same portion of the shop sign as occluded in FIG. 1A [a target image]; Fig 1B and ¶0021, at least disclose FIG. 1B is another photograph [an adjacent view], taken from a slightly different point of view [second parameter information] from that of FIG. 1A. Comparing FIGS. 1A and 1B, the foreground street sign 100 has “moved” relative to the background shop sign 102. Because of this parallax “movement,” the portion of the shop sign 102 that is blocked in FIG. 1A is now clearly visible in FIG. 1B. (Also, a portion of the shop sign 102 that is visible in FIG. 1A is blocked in FIG. 1B.); ); and obtaining a final image by correcting the target image based on the adjacent image (Chen- Fig. 1C shows a photograph showing the same view as in FIG. 1A but post-processed to remove the portion of the street sign that occludes the shop sign and replace the vacant space with the missing portion of the shop sign; Fig. 1C and ¶0022-0023, at least disclose the viewpoint is the same as in FIG. 1A, but the processed image of FIG. 1C shows the entire shop sign 102. When the resulting image dataset is viewed, the processed image of FIG. 1C effectively replaces the image of FIG. 1A, thus rendering the shop sign 102 completely visible […] only the portion of the street sign 100 that blocks the shop sign 102 is removed: Most of the street sign 100 is left in place. 
This is meant to clearly illustrate the removal of the blocking portion of the street sign 100 […] the blocking portion of the street sign 100 is left in place but is visually “de-emphasized” (e.g., rendered semi-transparent) so that the shop sign 102 is revealed behind it [Wingdings font/0xE0] suggests the partial portion of shop sign 102 being blocked by street sign in Fig. 1A in the target image (the target image) being corrected by removing of the blocking portion of the street sign 100 which blocked the right portion of the shop sign 102 shown in Fig. 1B (the adjacent image)). Chen does not explicitly disclose, but Kosiorek discloses a neural scene representation (NSR) model (Kosiorek- Fig. 2 and ¶0070, at least disclose FIG. 2 illustrates an example of volumetric rendering of an image 215 of a scene 250 using a radiance field. The volumetric rendering can be used by an image rendering system (e.g., the system 100 in FIG. 1 ) to render an image 215 depicting the scene 250 from a perspective of a camera at a new camera location 216. In particular, the image rendering system can use a scene representation neural network 240 [a neural scene representation (NSR) model], defining a geometric model of the scene as a three-dimensional radiance field 260, to render the image 215). It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Chen to incorporate the teachings of Kosiorek, and apply the geometric model of the scene as a three-dimensional radiance field into Chen’s teachings for obtaining a target image corresponding to a target view based on by inputting first parameter information corresponding to the target view to a neural scene representation (NSR) model; obtaining an adjacent view that satisfies a predetermined condition with respect to the target view; obtaining an adjacent image corresponding to the adjacent view by inputting second parameter information corresponding to the adjacent view to the NSR model; and obtaining a final image by correcting the target image based on the adjacent image. Doing so would reduce the amount of computation and training needed as the scene representation neural network does not also have to learn how to render an image. Kosiorek further discloses obtaining a target image corresponding to a target view based on by inputting first parameter information corresponding to the target view to a neural scene representation (NSR) model (Kosiorek- Fig. 2 and ¶0070, at least disclose volumetric rendering of an image 215 of a scene 250 [obtaining a target image corresponding to a target view] using a radiance field. The volumetric rendering can be used by an image rendering system (e.g., the system 100 in FIG. 1 ) to render an image 215 depicting the scene 250 from a perspective of a camera at a new camera location 216 [inputting first parameter information]. In particular, the image rendering system can use a scene representation neural network 240 [a neural scene representation model], defining a geometric model of the scene as a three-dimensional radiance field 260, to render the image 215); obtaining an adjacent image corresponding to the adjacent view by inputting second parameter information corresponding to the adjacent view to the NSR model (Kosiorek- Fig. 
2 and ¶0077, at least disclose the system can use the neural network 240 conditioned on the latent variable representing the scene 250 to render another new image 220 of the scene 250 [adjacent image corresponding to the adjacent view] from the perspective of the camera at a completely different camera location 225 (e.g., illustrated as being perpendicular to the camera location 216) [second parameter information]). Regarding claim 2, Chen in view of Kosiorek, discloses the rendering method of claim 1, and further discloses wherein the obtaining the final image (see Claim 1 rejection for detailed analysis) comprises: detecting an occlusion area in the target image (Chen- Fig. 1A shows a photograph of a street locality showing a shop sign partially occluded by a street sign [a target image] with the first viewpoint; ¶0020, at least discloses a photograph of street scene with buildings adjoining the street. In the view of FIG. 1A, a street sign 100 partially blocks the view of a sign 102 on one of the shops next to the street [obtaining a target image corresponding to a target view]; Fig 2A and ¶0014, at least disclose an overhead schematic of a vehicle collecting images in a street locality; in this view, a doorway is partially occluded by a signpost [target view]; Fig 2A and ¶0026, at least disclose 206 shows the point-of-view [first parameter information] of just one camera in the image-capture system 204. As the vehicle 200 passes the building 208, this camera (and others, not shown) image various areas of the building 208. Located in front of the building 208 are “occluding objects,” here represented by signposts 210. These are called “occluding objects” because they hide (or “occlude”) whatever is behind them. In the example of FIG. 2A, from the illustrated point-of-view 206 [first parameter information], the doorway 212 of the building 208 [target view] is partially occluded by one of the signposts 210)); and correcting the occlusion area in the target image based on the adjacent image (Chen- Fig. 1C shows a photograph showing the same view as in FIG. 1A but post-processed to remove the portion of the street sign that occludes the shop sign and replace the vacant space with the missing portion of the shop sign; Fig. 1C and ¶0022-0023, at least disclose the viewpoint is the same as in FIG. 1A, but the processed image of FIG. 1C shows the entire shop sign 102. When the resulting image dataset is viewed, the processed image of FIG. 1C effectively replaces the image of FIG. 1A, thus rendering the shop sign 102 completely visible […] only the portion of the street sign 100 that blocks the shop sign 102 is removed: Most of the street sign 100 is left in place. This is meant to clearly illustrate the removal of the blocking portion of the street sign 100 […] the blocking portion of the street sign 100 is left in place but is visually “de-emphasized” (e.g., rendered semi-transparent) so that the shop sign 102 is revealed behind it [Wingdings font/0xE0] suggests the partial portion of shop sign 102 being blocked by street sign in Fig. 1A in the target image (the target image) being corrected by removing of the blocking portion of the street sign 100 which blocked the right portion of the shop sign 102 shown in Fig. 1B (the adjacent image)). Regarding claim 13, Chen in view of Kosiorek, discloses a non-transitory computer-readable storage medium storing instructions (Chen- Fig. 
3 and ¶0028-0029, at least disclose When the image dataset 300 is transformed or processed in some way, the resulting product is stored on the same computer-readable medium or on another one. The transformed image dataset 300, in whole or in part, may be distributed to users on a tangible medium, such as a CD, or may be transmitted over a network such as the Internet or over a wireless link to a user's mobile navigation system […] Many different applications can use this same image dataset 300. As one example, an image-based navigation application 302 allows a user to virtually walk down the street 202; Claim 17 at least cites “A computer-readable medium containing computer-executable instructions for a method for transforming an image dataset,”) that, when executed by a processor(Chen- Fig. 5 and ¶0040, at least disclose platform 500 runs one or more applications 502 including, for example, the image-based navigation application 302 as discussed above. For this application 302, the user navigates using tools provided by an interface 506 (such as a keyboard, mouse, microphone, voice recognition software, and the like); Kosiorek- ¶0135, at least discloses The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers […] he apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them) , cause the processor to perform a method comprising: obtaining a target image corresponding to a target view by inputting first parameter information corresponding to the target view to a neural scene representation (NSR) model (see Claim 1 rejection for detailed analysis); obtaining an adjacent view corresponding to the target view (Chen- Fig. 1B shows a photograph of the same locality as shown FIG. 1A but taken from a different viewpoint [an adjacent view] where the street sign does not occlude the same portion of the shop sign as occluded in FIG. 1A [target view]; Fig 1B and ¶0021, at least disclose FIG. 1B is another photograph [an adjacent view], taken from a slightly different point of view from that of FIG. 1A [the target view]); obtaining an adjacent image corresponding to the adjacent view by inputting second parameter information corresponding to the adjacent view to the NSR model (see Claim 1 rejection for detailed analysis); and obtaining a final image by correcting the target image based on the adjacent image (see Claim 1 rejection for detailed analysis). The rendering device of claims 14-15 are similar in scope to the functions performed by the method of claims 1-2 and therefore claims 14-15 are rejected under the same rationale. Regarding claim 14, Chen in view of Kosiorek, discloses a rendering device (Chen- Fig. 3 and ¶0028, at least disclose the image-capture system 204 delivers its images to an image dataset 300) comprising: a memory configured to store instructions (Chen- Fig. 3 and ¶0028-0029, at least disclose When the image dataset 300 is transformed or processed in some way, the resulting product is stored on the same computer-readable medium or on another one. 
The transformed image dataset 300, in whole or in part, may be distributed to users on a tangible medium, such as a CD, or may be transmitted over a network such as the Internet or over a wireless link to a user's mobile navigation system […] Many different applications can use this same image dataset 300. As one example, an image-based navigation application 302 allows a user to virtually walk down the street 202; Claim 17 at least cites “A computer-readable medium containing computer-executable instructions for a method for transforming an image dataset,”), at least one processor configured to execute the instructions (Chen- Fig. 5 and ¶0040, at least disclose platform 500 runs one or more applications 502 including, for example, the image-based navigation application 302 as discussed above. For this application 302, the user navigates using tools provided by an interface 506 (such as a keyboard, mouse, microphone, voice recognition software, and the like); Kosiorek- ¶0135, at least discloses The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers […] he apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them) to perform the method of claim 1. 6. Claims 3, 6, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Kosiorek, further in view of Kheradmand et al. (“Kheradmand”) [US-12,354,337-B1] Regarding claim 3, Chen in view of Kosiorek, discloses the rendering method of claim 1, and further discloses wherein the obtaining the final image (see Claim 1 rejection for detailed analysis), comprises: correcting the target image based on the visibility (Chen- Fig. 1C shows a photograph showing the same view as in FIG. 1A but post-processed to remove the portion of the street sign that occludes the shop sign and replace the vacant space with the missing portion of the shop sign; Fig. 1C and ¶0022-0023, at least disclose the viewpoint is the same as in FIG. 1A, but the processed image of FIG. 1C shows the entire shop sign 102. When the resulting image dataset is viewed, the processed image of FIG. 1C effectively replaces the image of FIG. 1A, thus rendering the shop sign 102 completely visible […] only the portion of the street sign 100 that blocks the shop sign 102 is removed: Most of the street sign 100 is left in place. This is meant to clearly illustrate the removal of the blocking portion of the street sign 100 […] the blocking portion of the street sign 100 is left in place but is visually “de-emphasized” (e.g., rendered semi-transparent) so that the shop sign 102 is revealed behind it [Wingdings font/0xE0] suggests correcting the target image based on the visibility). The prior art does not explicitly disclose, but Kheradmand discloses obtaining a visibility map based on the target image and the adjacent image (Kheradmand- col 8, lines 3-21, at least discloses A visibility map may additionally be used to generate the appearance feature for each of the input images 540A-C. The pose of an input image can be determined from the input image and the target pose 544 can be determined based on a target image that shows the target pose 544. 
The visibility map is determined based on the pose and the target pose, where the visibility map indicates a region in the target image that is also available in the input image; col 11, lines 62-64, at least discloses the flow includes operation 1008, where the computer system determines a visibility map based on the first pose and the target pose. The visibility map can indicate a region in the target image that is also available in the first image). It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Chen/Kosiorek to incorporate the teachings of Kheradmand, and apply the visibility map into Chen/Kosiorek’s teachings for obtaining a visibility map based on the target image and the adjacent image; and correcting the target image based on the visibility map. Doing so would provide technical improvements over conventional techniques for photo-realistic human image synthesis. Regarding claim 6, Chen in view of Kosiorek and Kheradmand, discloses the rendering method of claim 3, and further discloses wherein the correcting the target image (see Claim 1 rejection for detailed analysis) comprises: detecting an occlusion area in the target image based on the visibility map (Chen- Fig. 1A shows a photograph of a street locality showing a shop sign partially occluded by a street sign [a target image] with the first viewpoint; ¶0020, at least discloses a photograph of street scene with buildings adjoining the street. In the view of FIG. 1A, a street sign 100 partially blocks the view of a sign 102 on one of the shops next to the street [detecting an occlusion area in the target image]; Fig 2A and ¶0014, at least disclose an overhead schematic of a vehicle collecting images in a street locality; in this view, a doorway is partially occluded by a signpost [target view]; Kheradmand- col 3, lines 28-32, at least discloses the whole garment is never visible from a single view. Also, for many poses, different body parts occlude garments. So, using multiple views provides additional observations which enable the reconstruction of the entire texture of the garment in high fidelity); and correcting the occlusion area in the target image based on the adjacent image (Chen- Fig. 1C shows a photograph showing the same view as in FIG. 1A but post-processed to remove the portion of the street sign that occludes the shop sign and replace the vacant space with the missing portion of the shop sign; Fig. 1C and ¶0022-0023, at least disclose the viewpoint is the same as in FIG. 1A, but the processed image of FIG. 1C shows the entire shop sign 102. When the resulting image dataset is viewed, the processed image of FIG. 1C effectively replaces the image of FIG. 1A, thus rendering the shop sign 102 completely visible […] only the portion of the street sign 100 that blocks the shop sign 102 is removed: Most of the street sign 100 is left in place. This is meant to clearly illustrate the removal of the blocking portion of the street sign 100 […] the blocking portion of the street sign 100 is left in place but is visually “de-emphasized” (e.g., rendered semi-transparent) so that the shop sign 102 is revealed behind it [Wingdings font/0xE0] suggests the partial portion of shop sign 102 being blocked by street sign in Fig. 1A in the target image (the target image) being corrected by removing of the blocking portion of the street sign 100 which blocked the right portion of the shop sign 102 shown in Fig. 1B (the adjacent image)). 
It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Chen/Kosiorek to incorporate the teachings of Kheradmand, and apply the occlude garments into Chen/Kosiorek’s teachings for detecting an occlusion area in the target image based on the visibility map. Doing so would provide technical improvements over conventional techniques for photo-realistic human image synthesis. The rendering device of claims 16 and 18 are similar in scope to the functions performed by the method of claims 3 and 6 and therefore claims 16 and 18 are rejected under the same rationale. 7. Claims 4-5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Kosiorek, further in view of Kheradmand, still further in view of “Unsupervised Learning of Depth and Camera Pose with Feature Map Warping” by Guo et al., (“Guo”) Regarding claim 4, Chen in view of Kosiorek and Kheradmand, discloses the rendering method of claim 3, and further discloses wherein the obtaining the visibility map (see Claim 3 rejection for detailed analysis) comprises: obtaining a first warped image by warping the adjacent image to the target view (Kheradmand- Fig. 7 and col 7, lines 38-46, at least disclose Warping an input image can involve UV space warping 554 that generates a first warped image (e.g., warped image 774 in FIG. 7 ) by warping the input image in a UV space based on a pose of the input image and the target pose shown in a target image (e.g., target image 672 in FIG. 6 )); and obtaining the visibility map based on a difference between the first warped image and the target image (Kheradmand- Fig. 7 shows visibility map based on the first warped image 774 and the target pose 544; col 7, lines 40-46, at least discloses Warping an input image can involve UV space warping 554 that generates a first warped image (e.g., warped image 774 in FIG. 7 ) by warping the input image in a UV space based on a pose of the input image and the target pose shown in a target image (e.g., target image 672 in FIG. 6 )). The prior art does not explicitly disclose, but Guo discloses backward-warping the adjacent image (Guo- Fig 1 shows (a) DepthNet, loss function and warping. (b) MotionNet (c) MaskNet. It consists of the DepthNet for predicting depth map of the current frame It, the MotionNet for estimating egomotion from current frame It to adjacent frame If , and the MaskNet for generating occlusion-aware mask (OAM). The reconstructed current frame ˆIf and reconstructed current feature pyramid ˆFf are synthesized by warping; page 3, section 3. Method, 3.1. Preliminaries, 1st paragraph, at least discloses Our method uses single-view depth and multiview pose networks, with a loss based on warping the adjacent frames to the current frame using the computed depth and pose […] The warp process is to find the corresponding point in the adjacent frames through the depth map of the current frame and the camera egomotion, and then synthesize the current frame) It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Chen/Kosiorek/Kheradmand to incorporate the teachings of Guo, and apply warping the adjacent frames to the current frame into Chen/Kosiorek/Kheradmand’s teachings for obtaining a first warped image by backward-warping the adjacent image to the target view. Doing so would estimate depth and camera egomotion by minimizing photometric error between adjacent frames. 
Regarding claim 5, Chen in view of Kosiorek, Kheradmand and Guo, discloses the rendering method of claim 4, and further discloses wherein the obtaining the visibility map based on the difference (see Claim 4 rejection for detailed analysis) comprises: obtaining a visibility value for a first pixel of the first warped image based on a difference between the first pixel of the first warped image and a second pixel of the target image corresponding to the first pixel (Kheradmand- col 8, lines 7-18, at least discloses The visibility map is determined based on the pose and the target pose, where the visibility map indicates a region in the target image that is also available in the input image. The weights associated with each appearance feature can be determined based on the visibility map. For instance, an input to another ML model may be generated based on the visibility map and the warped image, and the other ML model can output an indication of the weights; Guo- Fig 2 and page 3, section 3.3. Occlusion-Aware Mask, 1st paragraph, at least disclose As shown in Figure 2, the pixels in the yellow dash area are visible in the past frame It−1 and current frame It but blocked by the vehicle in the next frame It+1. If the network predicts the correct depth of the pixels in the yellow dashed area in current frame, then the corresponding occluded area in the next frame does not match the current frame). The rendering device of claim 17 is similar in scope to the functions performed by the method of claim 4 and therefore claim 17 is rejected under the same rationale. 8. Claims 7-9 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Kosiorek, further in view of Kheradmand, still further in view of Ciurea et al. (“Ciurea”) [US-2015/0049917-A1] Regarding claim 7, Chen in view of Kosiorek and Kheradmand, discloses the rendering method of claim 6, and further discloses wherein the detecting the occlusion area (see Claim 6 rejection for detailed analysis), and the prior art does not explicitly disclose, but Ciurea discloses the rendering method comprises: detecting an occluded pixel having a visibility value that is greater than or equal to a threshold value in the target image (Ciurea- ¶0249, at least discloses the photometric distance of the pixels is utilized as a measure of similarity and a threshold used to determine pixels that are likely visible and pixels that are likely occluded; ¶0250, at least discloses When the photometric distance of the selected pixel from the reference image and one of the corresponding pixels is less than the threshold, then the corresponding pixel is determined (1112) to be visible. When the photometric distance of the selected pixel from the reference image and one of the corresponding pixels exceeds the threshold, then the corresponding pixel is determined (1114) to be occluded). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen/Kosiorek/Kheradmand to incorporate the teachings of Ciurea, and apply the visibility determination into Chen/Kosiorek/Kheradmand’s teachings for detecting an occluded pixel having a visibility value that is greater than or equal to a threshold value in the target image.
Doing so would provide an accurate account of the pixel disparity as a result of parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. Regarding claim 8, Chen in view of Kosiorek, Kheradmand and Ciurea, discloses the rendering method of claim 7, and further discloses wherein the correcting the occlusion area (see Claim 2 rejection for detailed analysis) comprises: replacing the occluded pixel in the target image with a pixel of the adjacent image corresponding to a position of the occluded pixel (Chen- Claim 13, at least cites “replacing 3D pixels occluded by the first object in a first image with 3D pixels corresponding to a same location in a second image and not occluded by the first object”). Regarding claim 9, Chen in view of Kosiorek, Kheradmand and Ciurea, discloses the rendering method of claim 7, and further discloses wherein the obtaining the adjacent view (see Claim 2 rejection for detailed analysis) comprises: obtaining a plurality of adjacent views corresponding to the target view (Chen- Fig. 1B shows a photograph of the same locality as shown FIG. 1A but taken from a different viewpoint [adjacent view] where the street sign does not occlude the same portion of the shop sign as occluded in FIG. 1A [a target image]; Fig 1B and ¶0021, at least disclose FIG. 1B is another photograph [an adjacent view], taken from a slightly different point of view from that of FIG. 1A. Comparing FIGS. 1A and 1B, the foreground street sign 100 has “moved” relative to the background shop sign 102. Because of this parallax “movement,” the portion of the shop sign 102 that is blocked in FIG. 1A is now clearly visible in FIG. 1B. (Also, a portion of the shop sign 102 that is visible in FIG. 1A is blocked in FIG. 1B); Fig. 2 shows an overhead schematic of a vehicle collecting images in a street locality), wherein the obtaining of the adjacent image comprises: obtaining a plurality of adjacent images, each of the plurality of adjacent images corresponding to one of the plurality of adjacent views (Chen- Fig. 1B shows a photograph of the same locality as shown FIG. 1A but taken from a different viewpoint [adjacent view] where the street sign does not occlude the same portion of the shop sign as occluded in FIG. 1A; Fig 2A and ¶0024, at least disclose The system in these figures collects images of a locality in the real world. In FIG. 2A, a vehicle 200 is driving down a road 202 […] As the vehicle 200 proceeds down the road 202, the captured images are stored and are associated with the geographical location at which each picture was taken), and wherein the correcting of the occlusion area (see Claim 2 rejection for detailed analysis) comprises: obtaining a pixel of each of the plurality of adjacent images corresponding to a position of the occluded pixel in the target image (Chen- Fig 1B and ¶0021, at least disclose FIG. 1B is another photograph [an adjacent view], taken from a slightly different point of view from that of FIG. 1A. Comparing FIGS. 1A and 1B, the foreground street sign 100 has “moved” relative to the background shop sign 102. Because of this parallax “movement,” the portion of the shop sign 102 that is blocked in FIG. 1A is now clearly visible in FIG. 1B. (Also, a portion of the shop sign 102 that is visible in FIG. 1A is blocked in FIG. 1B); ¶0036, at least discloses Turning back to FIG. 
2A, the data representing the signposts 210 are deleted, but that leaves a visual “hole” in the doorway 212 as seen from the point-of-view 206. Several techniques can be applied to fill this visual hole. In FIG. 2B, the point-of-view 214 includes the entire doorway 212 with no occlusion. The pixels taken in FIG. 2B can be used to fill in the visual hole; Claim 13, at least cites “replacing 3D pixels occluded by the first object in a first image with 3D pixels corresponding to a same location in a second image and not occluded by the first object”); and correcting the occluded pixel in the target image based on the pixel of each of the plurality of adjacent images (Chen- Claim 13, at least cites “replacing 3D pixels occluded by the first object in a first image with 3D pixels corresponding to a same location in a second image and not occluded by the first object”). The rendering device of claims 19-20 are similar in scope to the functions performed by the method of claims 7-8 and therefore claims 19-20 are rejected under the same rationale. 9. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Kosiorek, further in view of “A survey on image-based rendering—representation, sampling and compression” by Cha Zhang, Tsuhan Chen (“Zhang”) Regarding claim 10, Chen in view of Kosiorek, discloses the rendering method of claim 1, and does not explicitly disclose, but Zhang discloses wherein the obtaining the adjacent view comprises: obtaining the adjacent view by sampling views within a preset camera rotation angle based on the target view (Zhang- Fig. 4 and page 7, section 2.2.4. 3D—concentric mosaics and panoramic video, 1st and 2nd paragraphs, at least disclose In concentric mosaics, the scene is captured by mounting a camera at the end of a level beam, and shooting images at regular intervals as the beam rotates, as is shown in Fig. 4. The light rays are then indexed by the camera position or the beam rotation angle a, and the pixel locations (u, v) […] This parameterization is equivalent to having many slit cameras rotating around a common center and taking images along the tangent direction […] During the rendering, the viewer may move freely inside a rendering circle (Fig. 4) with radius R sin(FOV/2), where R is the camera path radius and FOV is the field of view of the cameras). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen/Kosiorek to incorporate the teachings of Zhang, and apply the beam rotation angle into Chen/Kosiorek’s teachings for obtaining the adjacent view by sampling views within a preset camera rotation angle based on the target view. Doing so would reproduce the scene correctly at an arbitrary viewpoint, with unknown or limited amount of geometry. 10. Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Kosiorek, further in view of Rong et al. (“Rong”) [US-2021/0383616-A1] Regarding claim 11, Chen in view of Kosiorek, discloses the rendering method of claim 1, and further discloses wherein the obtaining the target image comprises: obtaining a rendered image corresponding to the target view (Chen- Fig. 1A shows a photograph of a street locality showing a shop sign partially occluded by a street sign with the first viewpoint [a target image]; ¶0020, at least discloses a photograph of street scene with buildings adjoining the street. In the view of FIG.
1A, a street sign 100 partially blocks the view of a sign 102 on one of the shops next to the street [obtaining a rendered image corresponding to the target view]) . The prior art does not explicitly disclose, but Rong discloses a depth map corresponding to the target view (Rong- ¶0047, at least discloses view warping may begin by rendering the selected object data's 3D mesh model at selected target viewpoint to generate the corresponding target depth. The rendered depth map of the object data set along with the source camera images may be used to generate the object's 2D texture map using an inverse warping operation). It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Chen/Kosiorek to incorporate the teachings of Rong, and apply the depth map into Chen/Kosiorek’s teachings for obtaining a rendered image corresponding to the target view and a depth map corresponding to the target view. Doing so would provide augmented data that used to test safety features of software for a self-driving vehicle. Regarding claim 12, Chen in view of Kosiorek, discloses the rendering method of claim 1, and further discloses wherein the obtaining the adjacent image comprises: obtaining a rendered image corresponding to the adjacent view (Chen- Fig. 1B shows a photograph of the same locality as shown FIG. 1A but taken from a different viewpoint [an adjacent view] where the street sign does not occlude the same portion of the shop sign as occluded in FIG. 1A [a target image]; Fig 1B and ¶0021, at least disclose FIG. 1B is another photograph [an adjacent view], taken from a slightly different point of view from that of FIG. 1A). . The prior art does not explicitly disclose, but Rong discloses a depth map corresponding to the view (Rong- ¶0049, at least discloses The interpolation may be used to obtain the estimated depths to generate an estimated depth map of the image. The object data set may be processed to render the depth of the object). It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Chen/Kosiorek to incorporate the teachings of Rong, and apply the depth map into Chen/Kosiorek’s teachings for obtaining a rendered image corresponding to the adjacent view and a depth map corresponding to the adjacent view. Doing so would provide augmented data that used to test safety features of software for a self-driving vehicle. Conclusion 11. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. They are as recited in the attached PTO-892 form. 12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE whose telephone number is (571)272-5330. The examiner can normally be reached 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL LE/Primary Examiner, Art Unit 2614
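
For orientation, the claimed method that the rejection maps onto Chen, Kosiorek, Kheradmand, Guo, and Ciurea amounts to a short pipeline: render a target image and an adjacent image from the NSR model, backward-warp the adjacent image to the target view, derive a per-pixel visibility map from the difference, threshold it to find occluded pixels, and replace those pixels from the adjacent image. The sketch below is a minimal illustration of that pipeline only, not the applicant's or any reference's implementation: the NSR renderer and the warp are stubbed, and every function name and the threshold value are hypothetical.

```python
# Illustrative sketch only: the NSR renderer and the backward warp are stubbed,
# and every name and threshold here is hypothetical.
import numpy as np

def render_view(seed, shape=(64, 64, 3)):
    """Stand-in for the NSR model: returns an image for given camera parameters."""
    return np.random.default_rng(seed).random(shape).astype(np.float32)

def backward_warp(adjacent_img):
    """Stand-in for backward-warping the adjacent image into the target view."""
    return adjacent_img  # identity warp keeps the sketch self-contained

def visibility_map(target_img, warped_img):
    """Per-pixel visibility value from the difference between target and warped images."""
    return np.abs(target_img - warped_img).mean(axis=-1)

def correct_target(target_img, adjacent_img, warped_img, threshold=0.5):
    vis = visibility_map(target_img, warped_img)
    occluded = vis >= threshold                   # claim 7: visibility value >= threshold
    corrected = target_img.copy()
    corrected[occluded] = adjacent_img[occluded]  # claim 8: replace occluded pixels
    return corrected, occluded

target = render_view(seed=0)      # target view, first parameter information
adjacent = render_view(seed=1)    # adjacent view, second parameter information
warped = backward_warp(adjacent)  # claim 4: warp the adjacent image to the target view
final, occluded = correct_target(target, adjacent, warped)
print(f"{occluded.mean():.1%} of pixels flagged as occluded and replaced")
```

In a real pipeline the backward warp would use the adjacent view's camera parameters and a depth map, as in the Guo and Rong mappings above.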
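Claim 10, as mapped to Zhang's concentric mosaics, obtains the adjacent view by sampling views within a preset camera rotation angle of the target view. A minimal sketch of that selection step follows, with a yaw-only pose and the 15° threshold assumed purely for illustration:

```python
# Hypothetical sketch for the claim-10 step as mapped in the rejection:
# keep candidate views whose camera rotation lies within a preset angle
# of the target view. Poses are simplified to a single yaw angle.
def angular_distance_deg(yaw_a, yaw_b):
    """Smallest absolute difference between two yaw angles, in degrees."""
    return abs((yaw_a - yaw_b + 180.0) % 360.0 - 180.0)

def sample_adjacent_views(target_yaw, candidate_yaws, max_rotation_deg=15.0):
    """Return indices of candidate views within the preset rotation angle."""
    return [i for i, yaw in enumerate(candidate_yaws)
            if angular_distance_deg(target_yaw, yaw) <= max_rotation_deg]

print(sample_adjacent_views(10.0, [0.0, 20.0, 90.0, 355.0]))  # -> [0, 1, 3]
```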

Prosecution Timeline

May 16, 2024
Application Filed
Mar 15, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579211
AUTOMATED SHIFTING OF WEB PAGES BETWEEN DIFFERENT USER DEVICES
2y 5m to grant • Granted Mar 17, 2026

Patent 12579738
INFORMATION PRESENTING METHOD, SYSTEM THEREOF, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant • Granted Mar 17, 2026

Patent 12579072
GRAPHICS PROCESSOR REGISTER FILE INCLUDING A LOW ENERGY PORTION AND A HIGH CAPACITY PORTION
2y 5m to grant • Granted Mar 17, 2026

Patent 12573094
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
2y 5m to grant • Granted Mar 10, 2026

Patent 12558788
SYSTEM AND METHOD FOR REAL-TIME ANIMATION INTERACTIVE EDITING
2y 5m to grant • Granted Feb 24, 2026

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 88% (+22.1%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 864 resolved cases by this examiner. Grant probability derived from career allow rate.
