Prosecution Insights
Last updated: April 19, 2026
Application No. 18/448,666

IMAGE PROCESSING DEVICE AND BLURRED IMAGE GENERATION METHOD

Final Rejection (§103)

Filed: Aug 11, 2023
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: SK Hynix Inc.
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 10m
Grant Probability With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67%, above average (8 granted / 12 resolved; +4.7% vs TC avg)
Interview Lift: -11.4% (minimal), based on resolved cases with interview
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 46 applications
Total Applications: 58 (across all art units)
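Assuming the dashboard derives the with-interview figure by applying the examiner's career interview lift to this application's base grant probability (an inference from the numbers shown, not documented dashboard behavior), the figures above are mutually consistent:

```python
# Quick consistency check on the examiner dashboard figures above.
granted, resolved = 8, 12
allow_rate = round(100 * granted / resolved)
assert allow_rate == 67  # matches the 67% career allow rate

base_probability = 67.0   # this application's modeled grant probability
interview_lift = -11.4    # examiner's career interview lift
# 67.0 - 11.4 = 55.6, which the dashboard appears to display as 55%.
assert abs((base_probability + interview_lift) - 55.6) < 1e-6
```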

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 12 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1 and 3-20 are pending for examination following the response filed 11/23/2025. Claims 1, 3-6, 10, 13-16, and 20 have been amended and claim 2 has been cancelled.

Priority

Acknowledgement is made of Applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent application KR10-2022-0179694, filed on 12/20/2022.

Response to Arguments and Amendments

Applicant’s arguments with respect to independent claims 1, 13, and 20 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument, as facilitated by the newly added amendments. The amendments to claim 1 include limitations previously presented in dependent claims 2 and 6, which are taught by Geddes in view of Christian (see page 6 of the Non-Final Rejection filed 08/22/2025) and by Geddes in view of Christian and Lindskog (see pages 9-10 of the Non-Final Rejection filed 08/22/2025), respectively. Applicant argues in the Remarks filed 11/23/2025 that Geddes does not teach the newly added amendments. Please see below for the 35 U.S.C. § 103 rejection of independent claims 1, 13, and 20 over Geddes in view of Christian and Lindskog, as facilitated by the newly added amendments.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-7, 10-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Geddes (US20230344666A1) in view of Christian (US9848167B1) and Lindskog (US20200082535A1).
Regarding claim 1, Geddes teaches an image processing device comprising:

a pre-processor ([0040] The computing device 200 includes components or units, such as a processor 202) configured to extract a background area of an image based on pixel data received from an external device (camera of client device 502) and generate a low-resolution sub-image in which the background area is downscaled ([0017] According to implementations of this disclosure, where a bandwidth reduction affecting such a conference participant is detected, the video stream is segmented into a background and a foreground, in which the foreground is the portion of the video stream images depicting the conference participant and the background is the remaining portion of the video stream images. The background may then be treated as a virtual background described above, such as by the background being adjusted (e.g., by a decrease in a resolution of the background) without also adjusting the foreground (e.g., while maintaining a resolution of the foreground));

a background blur component (client application 510) configured to generate an intermediate image in which a blur operation is performed on the low-resolution sub-image ([0074] Based on a detection of a bandwidth reduction affecting the conference participant using the client device 502, the client application 510 may cause the virtual background software 512 to adjust one or more portions of the video stream being transmitted from the user device 502. [0097] Adjusting the background without also adjusting the foreground can also or instead include blurring the background based on the bandwidth reduction) and upscale the intermediate image to original resolution ([0092] The resolution of the virtual background can be restored (e.g., to the original resolution of the virtual background prior to the detection of the bandwidth reduction) based on the bandwidth of the computing device increasing); and

an image compositor (virtual background software) configured to generate a blurred image in which a foreground area of the image is composited to the upscaled intermediate image ([0069] The client application 510 obtains the composite images from the virtual background software 512 and transmits them as the video stream for the conference participant to the conferencing software 508. [0082] FIG. 6B is an illustration of an example of the composite image 600 of FIG. 6A including the foreground 602 and a virtual background 604B adjusted based on a detected bandwidth reduction. The virtual background 604B thus is a version of the virtual background 604A of FIG. 6A which has been adjusted).

Geddes does not teach wherein the pre-processor determines a target resolution of the low-resolution sub-image based on an intensity of the blur operation, and performs a downscaling operation according to the target resolution.

Christian, in the same field of endeavor of blurred image generation, teaches wherein the pre-processor determines a target resolution of the low-resolution sub-image based on an intensity of the blur operation, and performs a downscaling operation according to the target resolution ([col. 2 ln. 29-34] To reduce a bandwidth consumption and/or processing consumption while sending blurred video data to the second user, devices, systems and methods are disclosed that provide a standby mode that generates low resolution video data at a local device and sends the low resolution video data to a remote device. [col. 4 ln. 17-29] The device 102 may downsample (130) the second video data using a graphics processing unit (GPU) to generate downsampled video data, may optionally apply (132) a blurring process (e.g., apply a Gaussian blur or the like) to the downsampled video data to generate blurred video data and may send (134) the blurred video data. For example, the first device 102a may downsample the second video data from the second resolution to a third resolution (e.g., 12 pixels by 12 pixels or the like) using techniques known to one of skill in the art, such as bilinear downsampling, bilinear interpolation, bicubic interpolation, decimation, or the like. The device 102 may optionally apply the blurring process to distort the downsampled video data).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Christian to downscale the image resolution based on the blur intensity because "the blurring process consumes processing power of the local device and the blurred video data consumes bandwidth between the local device and the remote device" [col. 2 ln. 25-28].

Geddes does not teach wherein the background blur component determines an intensity of the blur operation based on a depth of field of the blurred image, the depth of field indicating a ratio of a portion corresponding to an in-focus point in the blurred image.
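To make the cited mechanism concrete, the downsample-then-blur operation Christian describes (col. 4 ln. 17-29) can be sketched in NumPy. The mapping from blur radius to downscale factor, and the separable box filter standing in for the Gaussian blur, are illustrative assumptions rather than anything prescribed by the references:

```python
import numpy as np

def target_resolution(h, w, blur_radius, max_factor=8):
    # Illustrative policy: a stronger blur tolerates a coarser sub-image,
    # so the downscale factor grows with the blur radius (capped).
    factor = min(max_factor, max(1, blur_radius))
    return h // factor, w // factor, factor

def downscale(img, factor):
    # Decimation, one of the downsampling options Christian lists
    # alongside bilinear and bicubic interpolation.
    return img[::factor, ::factor]

def box_blur(img, radius):
    # Separable box filter, a simple stand-in for a Gaussian blur;
    # radius 0 leaves the image unchanged.
    if radius <= 0:
        return img.copy()
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")
```

For a 64x64 frame and blur radius 4, this yields a 16x16 blurred sub-image; any monotone radius-to-factor policy would serve the same illustrative purpose.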
Lindskog, in the same field of endeavor of image background segmentation, teaches wherein the background blur component determines an intensity of the blur operation based on a depth of field of the blurred image, the depth of field indicating a ratio of a portion corresponding to an in-focus point in the blurred image ([0011] According to some embodiments disclosed herein, the camera devices may utilize one (or more) cameras and image sensors to capture an input image of a scene, as well as corresponding depth/disparity information for the captured scene, which may provide an initial estimate of the depth of the various objects in the captured scene and, by extension, an indication of the portions of the captured image that are believed to be in the scene's background and/or foreground… According to some such embodiments, the depth information data may be converted into the form of an initial blur map, e.g., a two-dimensional array of values, wherein each value represents a radius, diameter (or other size-indicative parameter) of the blurring operation to be applied to the corresponding pixel in the captured image in a blurring operation. [0005] For example, in such portrait-style, synthetic SDOF images, a greater amount of blurring may be applied to objects and pixels that are estimated to be farther away from the focal plane of a captured scene. In other words, in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are “deeper” in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Lindskog to determine the blur intensity based on the depth of field because "to achieve an image having a shallower depth of field, it may be necessary to artificially synthesize an out-of-focus blur in the image after it is captured, e.g., by using estimated depth maps for the captured images" [Lindskog 0004], and to determine the ratio of the portion corresponding to an in-focus point because "in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are 'deeper' in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image" [Lindskog 0005].

Regarding claim 3, Geddes, Christian, and Lindskog teach the device of claim 1.

Christian teaches wherein the pre-processor determines the target resolution to be less as the intensity of the blur operation increases ([col. 4 ln. 38-53] In the example illustrated in FIG. 1, the second device 102b may receive the first video data at a first time and may display the first video data on the display 108 to a second user. The first video data may have a relatively high bandwidth consumption and high image quality and may include details that enable the second user to identify an identity of the first user and/or objects of interest in the first video data. Later, the second device 102b may receive the blurred video data at a second time after the first time and may display video(s) based on the blurred video data on the display 108.
In contrast to the first video data, the blurred video data may have a relatively low bandwidth consumption and low image quality and may obscure details such that the presence of the first user and/or objects of interest in the first environment can be determined but identities of the first user and/or objects of interest cannot be determined by the second user).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Christian for the resolution to be less as the blur intensity increases because "the blurred video data reduces a bandwidth consumption while offering the first user privacy until the first user instructs the first device 102a to enter the active mode" [col. 4 ln. 54-56].

Regarding claim 4, Geddes, Christian, and Lindskog teach the device of claim 1.

Lindskog teaches wherein the pre-processor determines the background area and the foreground area based on depth information of the image ([0011] According to some embodiments disclosed herein, the camera devices may utilize one (or more) cameras and image sensors to capture an input image of a scene, as well as corresponding depth/disparity information for the captured scene, which may provide an initial estimate of the depth of the various objects in the captured scene and, by extension, an indication of the portions of the captured image that are believed to be in the scene's background and/or foreground).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Lindskog to determine the background and foreground area based on depth information because "in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are 'deeper' in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image" [Lindskog 0005].

Regarding claim 5, Geddes, Christian, and Lindskog teach the device of claim 1.

Christian teaches wherein the pre-processor performs the downscaling operation in a bilinear interpolation method, and generates the low-resolution sub-image having the target resolution ([col. 2 ln. 29-34] To reduce a bandwidth consumption and/or processing consumption while sending blurred video data to the second user, devices, systems and methods are disclosed that provide a standby mode that generates low resolution video data at a local device and sends the low resolution video data to a remote device. [col. 4 ln. 17-29] The device 102 may downsample (130) the second video data using a graphics processing unit (GPU) to generate downsampled video data, may optionally apply (132) a blurring process (e.g., apply a Gaussian blur or the like) to the downsampled video data to generate blurred video data and may send (134) the blurred video data. For example, the first device 102a may downsample the second video data from the second resolution to a third resolution (e.g., 12 pixels by 12 pixels or the like) using techniques known to one of skill in the art, such as bilinear downsampling, bilinear interpolation, bicubic interpolation, decimation, or the like. The device 102 may optionally apply the blurring process to distort the downsampled video data).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Christian to downscale the image using bilinear interpolation because "the blurring process consumes processing power of the local device and the blurred video data consumes bandwidth between the local device and the remote device" [col. 2 ln. 25-28].

Regarding claim 6, Geddes, Christian, and Lindskog teach the device of claim 1.

Lindskog teaches wherein the background blur component performs the blur operation according to the intensity of the blur operation ([0011] According to some such embodiments, the depth information data may be converted into the form of an initial blur map, e.g., a two-dimensional array of values, wherein each value represents a radius, diameter (or other size-indicative parameter) of the blurring operation to be applied to the corresponding pixel in the captured image in a blurring operation. [0012] Based on the obtained segmentation mask, a determined amount of blurring may be subtracted from the amount of blurring indicated in the initial blur map for portions of the captured image that have been segmented out as being “people” (or other type of segmented object in the scene that is desired to be in focus in a given implementation)).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Lindskog to perform the blur operation according to the intensity because "to achieve an image having a shallower depth of field, it may be necessary to artificially synthesize an out-of-focus blur in the image after it is captured, e.g., by using estimated depth maps for the captured images" [Lindskog 0004].
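Lindskog's initial blur map ([0011]), a per-pixel blur radius that grows with a pixel's distance from the focal plane, can be sketched as follows. The linear mapping and the 16-pixel maximum radius are hypothetical choices introduced here for illustration, not values taken from the reference:

```python
import numpy as np

def initial_blur_map(depth, focal_depth, depth_of_field, max_radius=16.0):
    # Per-pixel blur radius: zero at the focal plane and growing with the
    # pixel's distance from it, in the spirit of Lindskog's initial blur
    # map. Dividing by the depth of field means a shallower depth of
    # field maps the same distance to a larger radius, i.e., stronger
    # blur (cf. the claim 7 limitation).
    dist = np.abs(depth - focal_depth) / max(depth_of_field, 1e-6)
    return np.clip(dist * max_radius, 0.0, max_radius)
```

With a focal plane at depth 1.0 and depth of field 2.0, a pixel at depth 2.0 receives radius 8 and a pixel at depth 5.0 saturates at the 16-pixel cap.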
Regarding claim 7, Geddes, Christian, and Lindskog teach the device of claim 6.

Lindskog teaches wherein the background blur component determines the intensity of the blur operation to be greater as the depth of field is shallower ([0034] Turning now to FIG. 1D, an initial blur map 120 for the image 100 shown in FIG. 1A is illustrated. In the convention of initial blur map 120, brighter pixels reflect pixels that are estimated to be farther from the focal plane, e.g., deeper, in the scene (thus resulting in a greater amount of blurring being applied during an SDOF rendering process), and darker pixels reflect pixels that are estimated to be closer to the focal plane, e.g., shallower, in the scene (thus resulting in a lesser amount of blurring being applied during an SDOF rendering process). As shown in FIG. 1D, the various human subjects 102/104/106 from image 100 are represented in the initial blur map 120 at positions 122/124/126, respectively. Initial blur map 120 also reflects the fact that subject 122 will receive comparatively less blurring than subjects 124/126, located deeper in the captured scene).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Lindskog for the blur intensity to be greater as the depth of field is shallower because "to achieve an image having a shallower depth of field, it may be necessary to artificially synthesize an out-of-focus blur in the image after it is captured, e.g., by using estimated depth maps for the captured images" [Lindskog 0004].

Regarding claim 10, Geddes, Christian, and Lindskog teach the device of claim 1.

Geddes further teaches wherein the background blur component performs the upscaling operation on the intermediate image in response to the downscaling operation ([0017] The background may then be treated as a virtual background described above, such as by the background being adjusted (e.g., by a decrease in a resolution of the background) without also adjusting the foreground (e.g., while maintaining a resolution of the foreground). [0092] The resolution of the virtual background can be restored (e.g., to the original resolution of the virtual background prior to the detection of the bandwidth reduction) based on the bandwidth of the computing device increasing).

Regarding claim 11, Geddes, Christian, and Lindskog teach the device of claim 10.

Christian teaches wherein the background blur component performs the upscaling operation in a bicubic interpolation method, and generates the intermediate image having the original resolution ([col. 11 ln. 54-67] A second GPU 420b on the second device 102b may perform upsampling 442 on the downsampled video data 312 (or blurred video data generated by the CPU 430 by applying the blurring process 432 to the downsampled video data 312) to generate the upsampled video data 314 having a fourth resolution, which is larger than the third resolution and may be larger than the first resolution and/or the second resolution. For example, the second GPU 420b may generate the upsampled video data 314 based on a resolution of the display 108 of the second device 102b, which may have a larger resolution than a maximum resolution of the camera 104. The first GPU 420a and/or the second GPU 420b may perform the downsampling 422 and/or upsampling 442 using bilinear upsampling/downsampling, bilinear interpolation, bicubic interpolation, decimation, or the like).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Christian to perform upscaling in bicubic interpolation because "the upsampled video data indicates an environment of the local device while blurring details, enabling a user of the remote device to identify movement or activity while maintaining privacy for anyone near the local device" [Christian col. 2 ln. 44-48].

Regarding claim 12, Geddes, Christian, and Lindskog teach the device of claim 6.

Lindskog teaches wherein the pre-processor determines a size of the extracted background area based on the depth of field ([0011] depth/disparity information for the captured scene, which may provide an initial estimate of the depth of the various objects in the captured scene and, by extension, an indication of the portions of the captured image that are believed to be in the scene's background and/or foreground… According to some such embodiments, the depth information data may be converted into the form of an initial blur map, e.g., a two-dimensional array of values, wherein each value represents a radius, diameter (or other size-indicative parameter) of the blurring operation to be applied to the corresponding pixel in the captured image in a blurring operation).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Lindskog to determine the size of the background based on the depth of field because "in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are 'deeper' in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image" [Lindskog 0005].
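The upscale-and-composite steps discussed for claims 10-11 (restore the blurred sub-image to the original resolution, then recombine it with the sharp foreground) can be sketched as below. Nearest-neighbour repetition stands in for the bicubic interpolation Christian lists, purely to keep the sketch self-contained; the mask-based blend is an illustrative reading of the compositor step, not code from any reference:

```python
import numpy as np

def upscale(img, factor):
    # Nearest-neighbour upscale by pixel repetition. Christian lists
    # bicubic interpolation for this step (col. 11 ln. 54-67); repetition
    # is used here only as a simpler stand-in.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def composite(original, blurred_bg, fg_mask):
    # Compositor step: keep the in-focus foreground pixels from the
    # original frame and take everything else from the blurred,
    # re-upscaled background. fg_mask is 1.0 on foreground pixels.
    return fg_mask * original + (1.0 - fg_mask) * blurred_bg
```

A 2x2 blurred sub-image upscaled by 2 and blended under a centered foreground mask keeps the original values inside the mask and the blurred values outside it.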
Regarding claim 13, Geddes teaches an image processing method (Fig. 7-8) comprising:

extracting a background area of an image based on pixel data received from an external device (camera of client device 502);

generating a low-resolution sub-image in which a downscaling operation is performed on the background area ([0017] According to implementations of this disclosure, where a bandwidth reduction affecting such a conference participant is detected, the video stream is segmented into a background and a foreground, in which the foreground is the portion of the video stream images depicting the conference participant and the background is the remaining portion of the video stream images. The background may then be treated as a virtual background described above, such as by the background being adjusted (e.g., by a decrease in a resolution of the background) without also adjusting the foreground (e.g., while maintaining a resolution of the foreground));

performing a blur operation on the low-resolution sub-image ([0074] Based on a detection of a bandwidth reduction affecting the conference participant using the client device 502, the client application 510 may cause the virtual background software 512 to adjust one or more portions of the video stream being transmitted from the user device 502. [0097] Adjusting the background without also adjusting the foreground can also or instead include blurring the background based on the bandwidth reduction);

upscaling an intermediate image on which the blur operation is performed to original resolution ([0092] The resolution of the virtual background can be restored (e.g., to the original resolution of the virtual background prior to the detection of the bandwidth reduction) based on the bandwidth of the computing device increasing); and

compositing the upscaled intermediate image and a foreground area of the image ([0069] The client application 510 obtains the composite images from the virtual background software 512 and transmits them as the video stream for the conference participant to the conferencing software 508. [0082] FIG. 6B is an illustration of an example of the composite image 600 of FIG. 6A including the foreground 602 and a virtual background 604B adjusted based on a detected bandwidth reduction. The virtual background 604B thus is a version of the virtual background 604A of FIG. 6A which has been adjusted).

Geddes does not teach wherein generating the low-resolution sub-image comprises: determining a target resolution of the low-resolution sub-image based on an intensity of the blur operation.

Christian, in the same field of endeavor of blurred image generation, teaches wherein generating the low-resolution sub-image comprises: determining a target resolution of the low-resolution sub-image based on an intensity of the blur operation ([col. 2 ln. 29-34] To reduce a bandwidth consumption and/or processing consumption while sending blurred video data to the second user, devices, systems and methods are disclosed that provide a standby mode that generates low resolution video data at a local device and sends the low resolution video data to a remote device. [col. 4 ln. 17-29] The device 102 may downsample (130) the second video data using a graphics processing unit (GPU) to generate downsampled video data, may optionally apply (132) a blurring process (e.g., apply a Gaussian blur or the like) to the downsampled video data to generate blurred video data and may send (134) the blurred video data. For example, the first device 102a may downsample the second video data from the second resolution to a third resolution (e.g., 12 pixels by 12 pixels or the like) using techniques known to one of skill in the art, such as bilinear downsampling, bilinear interpolation, bicubic interpolation, decimation, or the like. The device 102 may optionally apply the blurring process to distort the downsampled video data).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Geddes with the teachings of Christian to downscale the image resolution based on the blur intensity because "the blurring process consumes processing power of the local device and the blurred video data consumes bandwidth between the local device and the remote device" [col. 2 ln. 25-28].

Geddes does not teach determining an intensity of the blur operation based on a depth of field of the image, the depth of field indicating a ratio of a portion corresponding to an in-focus point in the blurred image.

Lindskog, in the same field of endeavor of image background segmentation, teaches determining an intensity of the blur operation based on a depth of field of the blurred image, the depth of field indicating a ratio of a portion corresponding to an in-focus point in the blurred image ([0011] According to some embodiments disclosed herein, the camera devices may utilize one (or more) cameras and image sensors to capture an input image of a scene, as well as corresponding depth/disparity information for the captured scene, which may provide an initial estimate of the depth of the various objects in the captured scene and, by extension, an indication of the portions of the captured image that are believed to be in the scene's background and/or foreground… According to some such embodiments, the depth information data may be converted into the form of an initial blur map, e.g., a two-dimensional array of values, wherein each value represents a radius, diameter (or other size-indicative parameter) of the blurring operation to be applied to the corresponding pixel in the captured image in a blurring operation. [0005] For example, in such portrait-style, synthetic SDOF images, a greater amount of blurring may be applied to objects and pixels that are estimated to be farther away from the focal plane of a captured scene. In other words, in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are “deeper” in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Geddes with the teachings of Lindskog to determine the blur intensity based on the depth of field because "to achieve an image having a shallower depth of field, it may be necessary to artificially synthesize an out-of-focus blur in the image after it is captured, e.g., by using estimated depth maps for the captured images" [Lindskog 0004], and to determine the ratio of the portion corresponding to an in-focus point because "in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are 'deeper' in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image" [Lindskog 0005].

Regarding claim 14, Geddes, Christian, and Lindskog teach the method of claim 13.
Lindskog teaches wherein the extracting of the background area comprises: determining the background area based on depth information of the image ([0011] According to some embodiments disclosed herein, the camera devices may utilize one (or more) cameras and image sensors to capture an input image of a scene, as well as corresponding depth/disparity information for the captured scene, which may provide an initial estimate of the depth of the various objects in the captured scene and, by extension, an indication of the portions of the captured image that are believed to be in the scene's background and/or foreground); and determining a size of the background area based on the depth of field for the blur operation ([0011] corresponding depth/disparity information for the captured scene, which may provide an initial estimate of the depth of the various objects in the captured scene and, by extension, an indication of the portions of the captured image that are believed to be in the scene's background and/or foreground…According to some such embodiments, the depth information data may be converted into the form of an initial blur map, e.g., a two-dimensional array of values, wherein each value represents a radius, diameter (or other size-indicative parameter) of the blurring operation to be applied to the corresponding pixel in the captured image in a blurring operation). 
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Geddes with the teachings of Lindskog to determine the background area based on depth information because "in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are 'deeper' in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image" [Lindskog 0005].

Regarding claim 15, Geddes, Christian, and Lindskog teach the method of claim 14. Geddes teaches wherein generating the low-resolution sub-image comprises: performing the downscaling operation on the background area ([0017] According to implementations of this disclosure, where a bandwidth reduction affecting such a conference participant is detected, the video stream is segmented into a background and a foreground, in which the foreground is the portion of the video stream images depicting the conference participant and the background is the remaining portion of the video stream images. The background may then be treated as a virtual background described above, such as by the background being adjusted (e.g., by a decrease in a resolution of the background) without also adjusting the foreground (e.g., while maintaining a resolution of the foreground)).

Geddes does not teach performing the downscaling operation in a bilinear interpolation method according to the target resolution. Christian teaches performing the downscaling operation in a bilinear interpolation method according to the target resolution ([col. 2 ln. 29-34] To reduce a bandwidth consumption and/or processing consumption while sending blurred video data to the second user, devices, systems and methods are disclosed that provide a standby mode that generates low resolution video data at a local device and sends the low resolution video data to a remote device. [col. 4 ln. 17-29] The device 102 may downsample (130) the second video data using a graphics processing unit (GPU) to generate downsampled video data, may optionally apply (132) a blurring process (e.g., apply a Gaussian blur or the like) to the downsampled video data to generate blurred video data and may send (134) the blurred video data. For example, the first device 102a may downsample the second video data from the second resolution to a third resolution (e.g., 12 pixels by 12 pixels or the like) using techniques known to one of skill in the art, such as bilinear downsampling, bilinear interpolation, bicubic interpolation, decimation, or the like. The device 102 may optionally apply the blurring process to distort the downsampled video data).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Geddes with the teachings of Christian to downscale the image using bilinear interpolation because "the blurring process consumes processing power of the local device and the blurred video data consumes bandwidth between the local device and the remote device" [col. 2 ln. 25-28].

Regarding claim 18, Geddes, Christian, and Lindskog teach the method of claim 15. Christian teaches wherein the upscaling comprises: determining the original resolution corresponding to the target resolution; upscaling the intermediate image in a bicubic interpolation method; and generating the intermediate image having the original resolution ([col. 11 ln. 54-67] A second GPU 420b on the second device 102b may perform upsampling 442 on the downsampled video data 312 (or blurred video data generated by the CPU 430 by applying the blurring process 432 to the downsampled video data 312) to generate the upsampled video data 314 having a fourth resolution, which is larger than the third resolution and may be larger than the first resolution and/or the second resolution. For example, the second GPU 420b may generate the upsampled video data 314 based on a resolution of the display 108 of the second device 102b, which may have a larger resolution than a maximum resolution of the camera 104. The first GPU 420a and/or the second GPU 420b may perform the downsampling 422 and/or upsampling 442 using bilinear upsampling/downsampling, bilinear interpolation, bicubic interpolation, decimation, or the like).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Geddes with the teachings of Christian to perform upscaling in bicubic interpolation because "the upsampled video data indicates an environment of the local device while blurring details, enabling a user of the remote device to identify movement or activity while maintaining privacy for anyone near the local device" [Christian col. 2 ln. 44-48].

Regarding claim 19, Geddes, Christian, and Lindskog teach the method of claim 13. Geddes further teaches wherein compositing the intermediate image and the foreground area comprises generating a blurred image in which the foreground area is composited to the intermediate image having the same resolution as the foreground area ([0097] Adjusting the background without also adjusting the foreground can also or instead include blurring the background based on the bandwidth reduction. [0082] FIG. 6B is an illustration of an example of the composite image 600 of FIG. 6A including the foreground 602 and a virtual background 604B adjusted based on a detected bandwidth reduction. The virtual background 604B thus is a version of the virtual background 604A of FIG. 6A which has been adjusted. [0092] The resolution of the virtual background can be restored (e.g., to the original resolution of the virtual background prior to the detection of the bandwidth reduction) based on the bandwidth of the computing device increasing).

Regarding claim 20, Geddes teaches an image processing system comprising: an image sensor configured to generate pixel data including brightness information ([0069] The virtual background software 512 performs a segmentation process to effectively clip out, from the images of the video stream captured by the camera of the client device 502, a portion of the images depicting the conference participant (i.e., the user of the client device 502). [0095] The foreground depicts the conference participant and thus represents a portion of images of the video stream that includes pixel information representative of the conference participant. The background corresponds to the remaining portion of the images of the video stream); and an image processing device (computing device 200) configured to generate a blurred image in which a background area of an image is blurred based on the pixel data ([0097] Adjusting the background without also adjusting the foreground can also or instead include blurring the background based on the bandwidth reduction), wherein the image processing device comprises: a pre-processor ([0040] The computing device 200 includes components or units, such as a processor 202) configured to extract a background area of an image and generate a low-resolution sub-image for the background area ([0017] According to implementations of this disclosure, where a bandwidth reduction affecting such a conference participant is detected, the video stream is segmented into a background and a foreground, in which the foreground is the portion of the video stream images depicting the conference participant and the background is the remaining portion of the video stream images. The background may then be treated as a virtual background described above, such as by the background being adjusted (e.g., by a decrease in a resolution of the background) without also adjusting the foreground (e.g., while maintaining a resolution of the foreground)); a background blur component (client application 510) configured to generate an intermediate image in which a blur operation is performed on the low-resolution sub-image ([0074] Based on a detection of a bandwidth reduction affecting the conference participant using the client device 502, the client application 510 may cause the virtual background software 512 to adjust one or more portions of the video stream being transmitted from the user device 502. [0097] Adjusting the background without also adjusting the foreground can also or instead include blurring the background based on the bandwidth reduction) and increase resolution of the intermediate image ([0092] The resolution of the virtual background can be restored (e.g., to the original resolution of the virtual background prior to the detection of the bandwidth reduction) based on the bandwidth of the computing device increasing); and an image compositor (virtual background software) configured to composite the intermediate image of which resolution is increased to a foreground area of the image ([0069] The client application 510 obtains the composite images from the virtual background software 512 and transmits them as the video stream for the conference participant to the conferencing software 508. [0082] FIG. 6B is an illustration of an example of the composite image 600 of FIG. 6A including the foreground 602 and a virtual background 604B adjusted based on a detected bandwidth reduction. The virtual background 604B thus is a version of the virtual background 604A of FIG. 6A which has been adjusted. [0092] The resolution of the virtual background can be restored (e.g., to the original resolution of the virtual background prior to the detection of the bandwidth reduction) based on the bandwidth of the computing device increasing).

Geddes does not teach generate a low-resolution sub-image based on an intensity of a blur operation, wherein the pre-processor determines a target resolution of the low-resolution sub-image based on an intensity of the blur operation, and performs a downscaling operation according to the target resolution.
Christian, in the same field of endeavor of blurred image generation, teaches generate a low-resolution sub-image based on an intensity of the blur operation, wherein the pre-processor determines a target resolution of the low-resolution sub-image based on an intensity of the blur operation, and performs a downscaling operation according to the target resolution ([col. 2 ln. 29-34] To reduce a bandwidth consumption and/or processing consumption while sending blurred video data to the second user, devices, systems and methods are disclosed that provide a standby mode that generates low resolution video data at a local device and sends the low resolution video data to a remote device. [col. 4 ln. 17-29] The device 102 may downsample (130) the second video data using a graphics processing unit (GPU) to generate downsampled video data, may optionally apply (132) a blurring process (e.g., apply a Gaussian blur or the like) to the downsampled video data to generate blurred video data and may send (134) the blurred video data. For example, the first device 102a may downsample the second video data from the second resolution to a third resolution (e.g., 12 pixels by 12 pixels or the like) using techniques known to one of skill in the art, such as bilinear downsampling, bilinear interpolation, bicubic interpolation, decimation, or the like. The device 102 may optionally apply the blurring process to distort the downsampled video data).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Christian to downscale the image resolution based on the blur intensity because "the blurring process consumes processing power of the local device and the blurred video data consumes bandwidth between the local device and the remote device" [col. 2 ln. 25-28].
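The claimed steps mapped to Christian, choosing a target resolution from the blur intensity and then downscaling with bilinear interpolation, can be sketched as follows. The intensity-to-factor rule is a hypothetical policy for illustration only; it does not come from Christian or the claims:

```python
def target_resolution(width, height, blur_intensity):
    # Hypothetical policy: a stronger blur destroys fine detail anyway,
    # so it tolerates a coarser (more aggressively downscaled) target.
    factor = 1 + blur_intensity
    return max(1, width // factor), max(1, height // factor)

def bilinear_resize(img, out_w, out_h):
    """Resize a 2-D list of gray values by bilinear interpolation:
    each output pixel blends its four nearest source pixels."""
    in_h, in_w = len(img), len(img[0])
    out = []
    for j in range(out_h):
        y = j * (in_h - 1) / max(out_h - 1, 1)   # source row coordinate
        y0 = int(y); y1 = min(y0 + 1, in_h - 1); fy = y - y0
        row = []
        for i in range(out_w):
            x = i * (in_w - 1) / max(out_w - 1, 1)  # source column coordinate
            x0 = int(x); x1 = min(x0 + 1, in_w - 1); fx = x - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

For example, `target_resolution(1920, 1080, 3)` yields a 480x270 target, after which the background sub-image would be resized to that target before the blur is applied, as in Christian's downsample-then-blur flow.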
Geddes does not teach generate depth information; extract the background area of the image based on the depth information, wherein the background blur component determines an intensity of the blur operation based on a depth of field of the blurred image, the depth of field indicating a ratio of a portion corresponding to an in-focus point in the blurred image.

Lindskog, in the same field of endeavor of image background segmentation, teaches generate depth information; extract the background area of the image based on the depth information, wherein the background blur component determines an intensity of the blur operation based on a depth of field of the blurred image, the depth of field indicating a ratio of a portion corresponding to an in-focus point in the blurred image ([0011] According to some embodiments disclosed herein, the camera devices may utilize one (or more) cameras and image sensors to capture an input image of a scene, as well as corresponding depth/disparity information for the captured scene, which may provide an initial estimate of the depth of the various objects in the captured scene and, by extension, an indication of the portions of the captured image that are believed to be in the scene's background and/or foreground…[0011] According to some such embodiments, the depth information data may be converted into the form of an initial blur map, e.g., a two-dimensional array of values, wherein each value represents a radius, diameter (or other size-indicative parameter) of the blurring operation to be applied to the corresponding pixel in the captured image in a blurring operation. [0005] For example, in such portrait-style, synthetic SDOF images, a greater amount of blurring may be applied to objects and pixels that are estimated to be farther away from the focal plane of a captured scene. In other words, in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are “deeper” in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Lindskog to extract the background area based on depth information because "in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are 'deeper' in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image" [Lindskog 0005] and to determine the blur intensity based on the depth of field because "to achieve an image having a shallower depth of field, it may be necessary to artificially synthesize an out-of-focus blur in the image after it is captured, e.g., by using estimated depth maps for the captured images" [Lindskog 0004] and to determine the ratio of the portion corresponding to an in-focus point "in synthetic SDOF images having a focal plane in the foreground of the captured scene, objects that are 'deeper' in the captured scene may have a greater amount of blurring applied to them, whereas in focus foreground objects, such as a human subject, may remain relatively sharper, thus pleasantly emphasizing the appearance of the human subject to a viewer of the image" [Lindskog 0005].

Claims 8-9 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Geddes in view of Christian, Lindskog, and Wajs (US20160255323A1).
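The limitation discussed above, a depth of field "indicating a ratio of a portion corresponding to an in-focus point," can be illustrated with a toy calculation. The tolerance band and the 0-10 intensity scale below are assumptions made for illustration; they are not taken from Lindskog or from the claims:

```python
def blur_intensity_from_dof(depth_map, focus_depth, tolerance=0.5):
    """Treat depth of field as the fraction of pixels whose depth lies
    within `tolerance` of the in-focus plane; a shallower depth of field
    (smaller in-focus ratio) maps to a stronger blur on a 0-10 scale."""
    depths = [d for row in depth_map for d in row]
    in_focus = sum(1 for d in depths if abs(d - focus_depth) <= tolerance)
    dof_ratio = in_focus / len(depths)          # the claimed "ratio"
    return round((1.0 - dof_ratio) * 10)        # hypothetical intensity scale
```

Under this reading, an image whose pixels all sit at the focal plane has ratio 1 and needs no synthetic blur, while a mostly out-of-focus scene gets a high intensity, consistent with Lindskog's goal of synthesizing a shallower depth of field after capture.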
Regarding claim 8, Geddes, Christian, and Lindskog teach the device of claim 6. Geddes teaches the low-resolution sub-image ([0017] The background may then be treated as a virtual background described above, such as by the background being adjusted (e.g., by a decrease in a resolution of the background) without also adjusting the foreground (e.g., while maintaining a resolution of the foreground)). Wajs, in the same field of endeavor of blur kernel convolutions, teaches wherein the background blur component generates a kernel of a point spread function corresponding to the intensity of the blur operation ([0062] blurriness is measured by the blur size or point spread function (PSF) of the imaging system), and performs a convolution operation of the image and the kernel ([0082] in FIG. 6, the IR image is convolved with many different blur kernels).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Wajs to use a point spread function and convolution for "estimating the object distance based on the color and infrared blur spots" [Wajs 0065].

Regarding claim 9, Geddes, Christian, Lindskog, and Wajs teach the device of claim 8. Wajs teaches wherein the background blur component determines a size of the kernel based on the target resolution ([0089] The size of the blur kernel may be selected to reduce computation (e.g., by down-sampling) and also possibly in order to provide sufficient resolution for the depth estimation). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Wajs to determine the kernel size based on the target resolution "to reduce computation while still maintaining a finer resolution" [Wajs 0078].

Regarding claim 16, Geddes, Christian, and Lindskog teach the method of claim 15.
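Claims 8-9 and 16-17 tie a point-spread-function kernel to the blur intensity and the kernel's size to the target resolution. A sketch using a Gaussian PSF; the sigma and sizing rules here are illustrative guesses, not taken from Wajs:

```python
import math

def gaussian_psf_kernel(intensity, target_res, original_res):
    """Build a normalized Gaussian PSF kernel. Sigma follows the blur
    intensity, shrunk in proportion to the downscaled target resolution
    so the same visual blur costs fewer taps at low resolution."""
    sigma = max(intensity * target_res / original_res, 0.5)
    radius = math.ceil(3 * sigma)                 # cover roughly +/- 3 sigma
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]  # weights sum to 1
```

Convolving the low-resolution sub-image with such a kernel is the claimed blur operation; because the kernel width (2 * radius + 1) shrinks with the target resolution, this mirrors the computation saving Wajs [0089] points to.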
Geddes teaches the low-resolution sub-image ([0017] The background may then be treated as a virtual background described above, such as by the background being adjusted (e.g., by a decrease in a resolution of the background) without also adjusting the foreground (e.g., while maintaining a resolution of the foreground)). Geddes does not teach wherein performing the blur operation comprises: generating a kernel of a point spread function corresponding to the intensity of the blur operation; and performing a convolution operation of the image and the kernel. Wajs teaches wherein performing the blur operation comprises: generating a kernel of a point spread function corresponding to the intensity of the blur operation ([0062] blurriness is measured by the blur size or point spread function (PSF) of the imaging system), and performing a convolution operation of the image and the kernel ([0082] in FIG. 6, the IR image is convolved with many different blur kernels).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the device of Geddes with the teachings of Wajs to use a point spread function and convolution for "estimating the object distance based on the color and infrared blur spots" [Wajs 0065].

Regarding claim 17, Geddes, Christian, Lindskog, and Wajs teach the method of claim 16. Wajs teaches wherein generating the kernel of the point spread function comprises determining a size of the kernel based on the target resolution ([0089] The size of the blur kernel may be selected to reduce computation (e.g., by down-sampling) and also possibly in order to provide sufficient resolution for the depth estimation).
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Geddes with the teachings of Wajs to determine the kernel size based on the target resolution "to reduce computation while still maintaining a finer resolution" [Wajs 0078].

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571) 272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACQUELINE R ZAK/Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Aug 11, 2023
Application Filed
Aug 20, 2025
Non-Final Rejection — §103
Nov 23, 2025
Response Filed
Jan 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340
PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12462343
MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
2y 5m to grant Granted Nov 04, 2025
Patent 12373946
ASSAY READING METHOD
2y 5m to grant Granted Jul 29, 2025
Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
55%
With Interview (-11.4%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
