DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-14 and 18-23 are currently pending in the present application, with claims 1, 18, and 23 being independent.
Response to Amendments / Arguments
Applicant’s arguments with respect to claims 1-14 and 18-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments filed 12/25/2025 have been fully considered but they are not persuasive.
Applicant argues: Guo et al. (CN 104992442) does not disclose a “split line depth value”.
Examiner replies that Guo expressly discloses that the average depth of the two white lines is dw (Par. 0095) and that dw is the average depth of the two reference lines (Par. 0098). Thus, Guo explicitly defines a depth value associated with the reference lines. The parameter dw is calculated from depth data corresponding to the reference lines and is used as a depth threshold in subsequent processing. Under the broadest reasonable interpretation (BRI), the claims do not require that the line inherently “possess” depth; they require “determining a split line depth value”, and Guo determines that dw represents the average depth of those lines.
Applicant argues: Guo et al. (CN 104992442) does not disclose “determining a depth split line corresponding to a current pixel point of a current depth view, and determining a pixel value of the current pixel point based on a pixel depth value of the current pixel point and a split line depth value of the depth split line”, and further asserts that Guo merely performs blur kernel scaling based on dp and does not determine a pixel value based on both dp and a line depth value.
Examiner replies that Guo discloses that, for each pixel p, a depth value dp is obtained (Par. 0095), that pixels with depth greater than dw are blurred, with the blur window increasing with depth (Par. 0095-0098), and that the fuzzy window size is calculated based on dp relative to dw (Par. 0095). Under the broadest reasonable interpretation, modifying a pixel’s value via depth-dependent filtering constitutes determining a pixel value based on both a pixel depth value and a split line depth value. Blurring modifies the pixel value; therefore, the output pixel value is determined based on the pixel’s depth value (dp) and the reference-line depth value (dw), where the blur operation necessarily results in a modified pixel value. The claim does not require a particular transformation or formula for determining the pixel value; it encompasses modifying the pixel output in response to a depth comparison. Accordingly, Guo teaches “determining a depth split line corresponding to a current pixel point of a current depth view, and determining a pixel value of the current pixel point based on a pixel depth value of the current pixel point and a split line depth value of the depth split line”.
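For illustration of the examiner’s reading only, the depth-gated filtering described above can be expressed as the following minimal sketch. This is not Guo’s implementation: the linear growth of the blur window between winmin and winmax is one plausible reading of Guo’s WinSize relation, and all function and variable names are hypothetical.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def depth_gated_blur(color, depth, dw, depth_max, win_min=3, win_max=15):
        # Pixels at or nearer than the reference-line depth dw keep their
        # original values; deeper pixels are blurred, with the window size
        # growing toward win_max at depth_max (assumed linear growth --
        # Guo's exact WinSize formula is not reproduced here).
        out = color.astype(np.float64)
        t = np.clip((depth - dw) / (depth_max - dw), 0.0, 1.0)
        win = np.rint(win_min + t * (win_max - win_min)).astype(int)
        for w in np.unique(win[depth > dw]):
            blurred = uniform_filter(out, size=(int(w), int(w), 1))
            m = (win == w) & (depth > dw)
            out[m] = blurred[m]
        return out.astype(color.dtype)

In this sketch the output pixel value depends on both dp (the per-pixel depth) and dw (the reference-line depth), which is the examiner’s point above.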
Regarding the remaining arguments: Applicant’s remaining arguments are directed to the amended claim language, which is fully addressed in the prior art rejections set forth below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-14 and 18-23 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (CN 104992442), hereinafter referred to as “Guo”, in view of Tam et al. (US 20070024614), hereinafter referred to as “Tam”.
Regarding claim 1, Guo discloses determining a depth view of a plurality of video frames of a video (Fig. 2 and Par. 0006; Step 1. Extract color frames and depth frames: Use the Kinect depth camera to obtain real-time input color frame Ic and depth frame Id sequence),
and determining a split line depth value (Par. 0019-0020; take all the points 1 that satisfy the following formula (2) as candidate points for the reference line position…ds is the starting search reference line depth, de is the end of the search reference line depth. Par. 0058; the greater the degree of vision blur, the ds start searching for the depth of the reference line) of at least one depth split line corresponding to the video (Par. 0007-0009; Step 2. Deep frame stretching…Step 3. Divide the fine foreground mask…Step 4. Calculate the position of the reference line: Determine the reference lines Ileft and Iright in the left and right halves of the scene) based on the depth view (Par. 0015-0018; maximum depth and minimum depth in all the depth frames Id are calculated…and is mapped to a depth range (dmin, dmax) by using a linear transformation. The image Id', where d1>dmin, d2<dmax, is calculated as (1) …determine a virtual plane to enhance the foreground depth of field…top-view foreground Fv),
for the depth view, determining a depth split line (Par. 0023-0024; If k < 0, the general direction of the foreground motion is from the upper left corner to the lower right corner…the points satisfying formula (2) in the bird's-eye view image V is the candidate point of the reference lines Ileft and Iright…) corresponding to a current pixel point of a current depth view (Fig. 9 and Par. 0027; …the mapping expression of recasting the scene point Xc to the imaging plane point m is the following formula (5)), and determining a pixel value of the current pixel point (Par. 0031; …for all pixel points p in the new color frame Rc, Rd has its corresponding depth dp, and the fuzzy window size at point p is calculated…) based on a pixel depth value of the current pixel point (Par. 0031; …corresponding depth p…) and a split line depth value of the depth split line (Par. 0061; Based on the depth information and the breadth-first search BFS, a fine motion foreground Fp is segmented and the motion foreground in the three-dimensional scene is divided. The Fp is marked and projected onto the top view V. According to the movement trajectory characteristics of the motion foreground plan view V, two reference lines lleft and lright are calculated),
and determining a three-dimensional displaying video frame of the plurality of video frames of the video based on pixel values of a plurality of pixels in the depth view (Fig. 7-12 and Par. 0010-0013; Step 5. Apply the camera geometry principle to redraw the color frame Ic and the image Id' layer by layer to the new color frame Rc and the new depth frame Rd on the imaging plane…Step 7. Perform proper blurring on the distance scene…to obtain the resulting image Rcb…Step 8. Insert a reference line in the result image Rcb to obtain a result image Rcbp…The result image Rcbp obtained by inserting the result image Rcb complete the entire drawing process).
Guo does not disclose wherein the at least one depth split line is pre-set.
In the same art of stereoscopic and multiview imaging, Tam discloses wherein the at least one depth split line is pre-set (Par. 0053; selecting a value for zero-parallax setting (ZPS) between the nearest and farthest clipping planes of the depth map…).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Guo’s depth-based stereoscopic rendering to use Tam’s ZPS depth reference as a pre-set depth split line. Replacing a dynamically obtained depth reference with a selected zero-parallax depth setting is a predictable design alternative in depth-driven stereoscopic rendering systems that yields the expected benefit of stabilizing the depth boundary used for pixel classification and processing across frames.
Regarding claim 2, Guo discloses the method of claim 1, and further discloses before determining a depth view of a plurality of video frames of a video:
receiving the video (Par. 0006; Step 1. Extract color frames and depth frames: Use the Kinect depth camera to obtain real-time input color frame Ic and depth frame Id sequence),
setting at least one depth split line corresponding to the video (Fig. 6 and Par. 0009; Step 4. Calculate the position of the reference line: Determine the reference lines Ileft and Iright in the left and right halves of the scene), and determining a position and width of the at least one depth split line in the video (Fig. 6 and Par. 0020; ds is the starting search reference line depth, de is the end of the search reference line depth, and ds and de are set to the three equal divisions of the minimum circumscribed rectangle minBoundRect of the top view trajectory) based on a display parameter of the video (Par. 0022-0023; If k < 0, the general direction of the foreground movement is from the upper right corner to the lower left corner…If k < 0, the general direction of the foreground motion is from the upper left corner to the lower right corner…),
and wherein the display parameter is a display length and a display width of the video (Par. 0028; o-xyz coordinate system…f is the distance…width and height of the imaging plane) displayed on a display interface (Par. 0005; video stereoscopic rendering method for flat display devices).
Regarding claim 3, Guo discloses the method of claim 1, and further discloses wherein determining a depth view of a plurality of video frames of a video comprises:
determining an initial depth view (Par. 0006; extract color frames and depth frames…depth frame Id sequences) and initial feature points of the plurality of video frames (Par. 0008; Step 3. Divide the fine foreground mask: Apply the frame difference method to the adjacent color frame Ic to subtract the background to get the foreground Fr… For each frame, find the point with the smallest depth in the rough foreground of the frame and perform the breadth-first search to find continuity in the three-dimensional scene. Point, get a fine foreground mask Fp. Fig. 3-4; foreground mask and projection points),
obtaining a collection of 3D feature point pairs of two adjacent video frames by processing the initial feature points of the two adjacent video frames sequentially, wherein the collection of 3D feature point pairs comprises a plurality of sets of 3D feature point pairs (Par. 0008; For each frame, find the point with the smallest depth in the rough foreground of the frame and perform the breadth-first search to find continuity in the three-dimensional space…get a fine foreground mask Fp),
determining a camera motion parameter of the two adjacent video frames based on the plurality of sets of 3D feature point pairs of the collection of 3D feature point pairs (Par. 0010; Step 5. Apply the camera geometry principle to redraw the color frame Ic and the image Id' layer by layer to the new color frame Rc and the new depth frame Rd…divide into multiple layers, make camera geometry photography of the points…repair the cracks in the camera), and determining the camera motion parameter as a camera motion parameter of a preceding video frame among the two adjacent video frames (Par. 0010; Step 5. Apply the camera geometry principle to redraw the color frame Ic and the image Id' layer by layer to the new color frame Rc and the new depth frame Rd…divide into multiple layers, make camera geometry photography of the points…repair the cracks in the camera),
wherein the camera motion parameter comprises a rotation matrix and a displacement matrix (Par. 0010; camera geometry principle…camera geometry photography),
and determining the depth view of the plurality of video frames based on initial depth view of the plurality of video frames and the corresponding camera motion parameter (Par. 0015; the maximum depth and minimum depth in all the depth frames Id are calculated…and is mapped to a depth range).
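For context on the rotation matrix and displacement matrix referenced in the claim 3 mapping above: a rigid camera motion between matched 3D feature point pairs of two adjacent frames is conventionally recovered by a least-squares alignment (the Kabsch method). The sketch below is illustrative only and is not asserted to be Guo’s procedure; all names are hypothetical.

    import numpy as np

    def rigid_motion(P, Q):
        # P, Q: (N, 3) arrays of matched 3D feature points from two
        # adjacent frames. Returns rotation R and translation t such
        # that Q is approximately P @ R.T + t (Kabsch least squares).
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        D = np.diag([1.0, 1.0, d])      # guard against a reflection
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t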
Regarding claim 4, Guo discloses the method of claim 3, and further discloses wherein determining an initial depth view and initial feature points of the plurality of video frames comprises:
obtaining the initial depth view of the plurality of video frames by performing a depth estimation on the plurality of video frames (Par. 0007 and 0015; Step 2. Deep frame stretching: linear transformation of the depth frame, bilateral filtering to obtain the image Id'…the maximum depth and the minimum depth in all the depth frames Id are calculated…and is mapped to a depth range (dmin,dmax) by using a linear transformation…calculated as Formula (1)),
and determining the initial feature points of the plurality of video frames by processing the plurality of video frames based on a feature point detection algorithm (Par. 0008; apply the frame difference method…subtract the background…perform the breadth-first search to find continuity in the three-dimensional scene).
Regarding claim 5, Guo discloses the method of claim 3, and further discloses wherein obtaining a collection of 3D feature point pairs of two adjacent video frames by processing the initial feature points of the two adjacent video frames sequentially comprises:
obtaining at least one set of 2D feature point pairs associated with the two adjacent video frames by matching the initial feature points of the two adjacent video frames sequentially based on a feature point matching algorithm (Par. 0008; …divides the coarse motion foreground mask. For each frame, find the point with the smallest depth in the rough foreground of the frame and perform the breadth-first search to find continuity in the three-dimensional scene. Point, get a fine foreground mask Fp),
obtaining an original 3D point cloud corresponding to an initial depth view (paragraph ref/citation) and at least one set of 3D feature point pairs corresponding to the at least one set of 2D feature point pairs (paragraph ref/citation), by performing a 3D point cloud reconstruction to the initial depth view of the two adjacent video frames (Par. 0010; Step 5. Apply the camera geometry principle to redraw the color frame Ic and the image Id' layer by layer to the new color frame Rc and the new depth frame Rd…divide into multiple layers, make camera geometry photography of the points…repair the cracks in the camera. Finally, the repaired layer is drawn on the new color frame Rc and the new depth frame Rd. Fig. 7-9 showing projection and repairs),
and determining the collection of 3D feature point pairs of the two adjacent video frames based on the at least one set of 3D feature point pairs (Par. 0010; …redraw the color frame Ic and the image Id' layer by layer to the new color frame Rc and the new depth frame Rd…the repaired layer is drawn on the new color frame Rc and the new depth frame Rd).
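For context on the 3D point cloud reconstruction referenced in the claim 5 mapping above: a depth view is conventionally back-projected to a point cloud using the pinhole camera model. The sketch below is illustrative only; the intrinsics fx, fy, cx, cy are assumptions and are not taken from Guo.

    import numpy as np

    def backproject(depth, fx, fy, cx, cy):
        # Back-project each pixel (u, v) with depth z to a 3D point
        # (x, y, z) under the pinhole model (illustrative only).
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.astype(np.float64)
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)   # (h, w, 3) point cloud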
Regarding claim 6, Guo discloses the method of claim 4 and further discloses wherein determining the depth view of the plurality of video frames based on the initial depth view of the plurality of video frames and the corresponding camera motion parameter comprises:
obtaining a to-be-used 3D point cloud of a current initial depth view for the initial depth view based on an original 3D point cloud (Par. 0010-0013; Step 5. Apply the camera geometry principle to redraw the color frame Ic and the image Id' layer by layer…on the imaging plane for the points in the color frame Ic and the image Id'…the point m is the point where Xc is projected on the camera imaging plane π), a rotation matrix (Par. 0010-0013; apply the camera geometry principle… divide into multiple layers, apply a camera perspective transformation), and a translation matrix of the current initial depth view (Par. 0013; apply a camera perspective transformation to the Loc),
and obtaining a depth view corresponding to all video frames (Par. 0013; Insert a reference line in the result image Rcb to obtain a result image Rcbp…completes the entire drawing process) based on the original 3D point cloud (Par. 0010; Step 5. Apply the camera geometry principle to redraw the color frame Ic and the image Id' layer by layer…on the imaging plane for the points in the color frame Ic and the image Id'), the to-be-used 3D point cloud (Par. 0027-0029; recasting the scene point Xc to the imaging plane point m…the inpainting algorithm is used to repair the cracks in layers in the point Ic and Id'), and a predetermined depth adjustment coefficient of the initial depth view (Par. 0031-0032; for all pixel points p in the new color frame Rc, Rd has its corresponding depth dp, and the fuzzy window size at point p is calculated according to dp WinSize…winmax is the corresponding fuzzy window at depthmax, and winmin is the corresponding fuzzy window at dw).
Regarding claim 7, Guo discloses the method of claim 1, and further discloses before determining a split line depth value of at least one depth split line corresponding to the video based on the depth view:
determining a significant object in the plurality of video frames (Par. 0008; …remove the small bright area, delete the small branch only retains the branch with the largest area and divides the coarse motion foreground mask), and determining an initial mask map of a corresponding video frame based on the significant object (Par. 0008; foreground mask…Par. 0017-0020; specific operation process of step 4 is to determine a virtual plane to enhance the foreground depth of field, mark the foreground in the fine three-dimensional scene in the fine foreground mask Fp, and calculate the reference line position algorithm…), to determine the split line depth value based on the initial mask map of the plurality of video frames and the depth view (Par. 0017-0026; calculate the reference line position algorithm…candidate point of the reference lines lleft and lright…points that are preferentially close to l1 are prioritized…under the same distance condition, the points with larger depth have priority…).
Regarding claim 8, Guo discloses the method of claim 7, and further discloses wherein determining a split line depth value of at least one depth split line corresponding to the video based on the depth view (Par. 0020; ds is the starting search reference line depth, de is the end of the search reference line depth…Par. 0058; the greater the degree of vision blur, the ds start searching for the depth of the reference line) comprises:
determining an average depth value (Par. 0012; the average depth of the reference lines lleft and lright is dw, and blur the point with a depth greater than dw in the new color frame Rc to obtain the resulting image Rcb, and the greater the depth, the greater the blurring window) of a mask area in the initial mask map for the plurality of video frames, based on an initial mask map and a depth view of a current video frame (Par. 0008; divide the fine foreground mask…divide the coarse motion foreground mask… fine foreground mask Fp.),
and determining the split line depth value of at least one depth split line based on the average depth value of the plurality of video frames and a predetermined split line adjustment coefficient (Par. 0031-0032; …for all pixel points p in the new color frame Rc, Rd has its corresponding depth dp, and the fuzzy window size at point p is calculated according to dp WinSize…depthmax is the maximum depth in the scene, dw is the average depth of two reference lines, dp is p-dot depth, winmax is the corresponding fuzzy window at depthmax, and winmin is the corresponding fuzzy window at dw).
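To make the relation among the cited quantities concrete, the cited passage admits the following worked form (an assumption, since Guo’s formula is not reproduced verbatim in the record): for a pixel p with depth dp > dw,

    WinSize(p) = winmin + (winmax - winmin) * (dp - dw) / (depthmax - dw)

so that WinSize equals winmin at the split line depth dw and winmax at depthmax, and the split line depth value dw is the threshold below which no blurring is applied and the pixel value is left unchanged.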
Regarding claim 9, Guo discloses the method of claim 8, and further discloses wherein determining an average depth value (dw) of a mask area in the initial mask map based on an initial mask map (Par. 0018; fine foreground mask Fp) and a depth view of a current video frame (Par. 0018; top-view foreground Fv) comprises:
in presence of the initial mask map corresponding to the current video frame,
determining initial depth values of a plurality of initial pixels of the mask area of the depth view (Par. 0056; Fp is a fine foreground mask segmented by combining the depth map and the BFS algorithm on the basis of Fr, Fv is the foreground top view of Fp projection on the top view angle, and V is the foreground topography trace obtained by superimposing the Fv non-zero pixels of all frames),
and determining the average depth value (Par. 0031; dw) of the mask area based on to-be-displayed depth values of a plurality of pixels and a plurality of initial depth values in the depth view (Par. 0020; p is the pixel point in the set P, represents the trajectory through which the foreground passes, ds is the starting search reference line depth, de is the end of the search reference line depth, and ds and de are set to the three equal divisions of the minimum circumscribed rectangle minBoundRect of the top view trajectory), or
in absence of the initial mask map corresponding to the current video frame, determining an average depth value of the current video frame based on a recorded average depth value for the plurality of video frames (Par. 0058; The greater the degree of vision blur, the ds start searching for the depth of the reference line. Generally, the depth of the reference line is set to 30% of the minimum circumscribed rectangle height of the trajectory, and is generally set to 70% of the minimum circumscribed rectangle height of the trajectory).
Regarding claim 10, Guo discloses the method of claim 8, and further discloses wherein determining the split line depth value of at least one depth split line based on the average depth value of the plurality of video frames and a predetermined split line adjustment coefficient comprises:
determining a maximum (Par. 0032; depthmax is the maximum depth of the scene) and minimum value (Par. 0058; lower limit of the depth value…dmin) of the average depth value based on the average depth value of the plurality of video frames (Par. 0012; depth greater than dw in the new color frame Rc to obtain the resulting image Rcb, and the greater the depth, the greater the blurring window),
and determining the split line depth value (Par. 0020; ds is the starting search reference line depth, de is the end of the search reference line depth…Par. 0058; the greater the degree of vision blur, the ds starts searching for the depth of the reference line) of at least one depth split line (Par. 0032; reference lines) based on the minimum value (Par. 0058; lower limit of the depth value…dmin), the split line adjustment coefficient (Par. 0031; dp WinSize), and the maximum value (Par. 0032; depthmax).
Regarding claim 11, Guo discloses the method of claim 10, and further discloses wherein the at least one depth split line comprises a first depth split line (Par. 0012; reference line lleft) and a second depth split line (Par. 0012; reference line lright), the predetermined split line adjustment coefficient (Par. 0031; dp WinSize) comprises a first split line adjustment coefficient (Par. 0032; winmax) and a second split line adjustment coefficient (Par. 0032; winmin),
and determining the split line depth value (Par. 0020; ds is the starting search reference line depth, de is the end of the search reference line depth…Par. 0058; the greater the degree of vision blur, the ds starts searching for the depth of the reference line) of at least one depth split line (Par. 0032; reference lines) based on the minimum value (Par. 0058; lower limit of the depth value…dmin), the split line adjustment coefficient (Par. 0031; dp WinSize), and the maximum value (Par. 0032; depthmax is the maximum depth of the scene) comprises: determining a first split line depth value of the first depth split line (Fig. 6 and Par. 0020; ds is the starting search reference line depth, de is the end of the search reference line depth, and ds and de are set to the three equal divisions) and a second split line depth value of the second depth split line (Fig. 6 and Par. 0020; ds is the starting search reference line depth, de is the end of the search reference line depth, and ds and de are set to the three equal divisions) based on the minimum value, the first split line adjustment coefficient, the second split line adjustment coefficient, and the maximum value (Par. 0031-0032; Rd has its corresponding depth dp, and the fuzzy window size at point p is calculated according to dp WinSize… depthmax is the maximum depth in the scene, dw is the average depth of two reference lines, dp is the p-dot depth, winmax is the corresponding fuzzy window at depthmax, and winmin is the corresponding fuzzy window at dw).
Regarding claim 12, Guo discloses the method of claim 1, and further discloses wherein determining a depth split line corresponding to a current pixel point of a current depth view comprises:
determining, based on position information of the current pixel point (Par. 0019-0020; points that satisfy the following formula (2) as candidate points for the reference line position…p is the pixel point in the set P) and a position (Par. 0013; the position of the two reference lines calculated in step 4 as Loc) and width of the at least one depth split line (reference lines), whether the current pixel point is located on the at least one depth split line (Formula (2)),
and in accordance with a determination that the current pixel point is located on the at least one depth split line, determining a depth split line comprising the current pixel point as the depth split line (Par. 0019; Take all the points that satisfy the following formula (2) as candidate points for the reference line position…Fig. 5-6 showing motion foreground trajectory and line positions. Par. 0024; …point satisfying formula (2) …V is the candidate point of the reference lines lleft and lright…).
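For illustration of the position-and-width test in the claim 12 mapping above, a minimal sketch, assuming a vertical split line located at column line_x with a given width (all names hypothetical; this is not Guo’s code):

    def on_split_line(x, line_x, width):
        # A pixel at column x lies on the split line when its horizontal
        # distance to the line position is within half the line width.
        return abs(x - line_x) <= width / 2.0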
Regarding claim 13, Guo discloses the method of claim 1, and further discloses wherein determining a pixel value of the current pixel point based on a pixel depth value of the current pixel point and a split line depth value of the depth split line comprises:
determining the pixel value of the current pixel (Par. 0031; …for all pixel points p in the new color frame Rc, Rd has its corresponding depth dp, and the fuzzy window size at point p is calculated…) based on the pixel depth value of the current pixel point (Par. 0031; for all pixel points p in the new color frame Rc …corresponding depth dp…), the split line depth value (Par. 0019-0020; take all the points 1 that satisfy the following formula (2) as candidate points for the reference line position…ds is the starting search reference line depth, de is the end of the search reference line depth. Par. 0058; the greater the degree of vision blur, the ds starts searching for the depth of the reference line), and the initial mask map of the video frame to which the current pixel belongs (Par. 0056; Fp is a fine foreground mask segmented by combining the depth map and the BFS algorithm on the basis of Fr, Fv is the foreground top view of Fp projection on the top view angle, and V is the foreground topography trace obtained by superimposing the Fv non-zero pixels of all frames).
Regarding claim 14, Guo discloses the method of claim 13, and further discloses wherein determining the pixel value of the current pixel based on the pixel depth value of the current pixel point, the split line depth value, and the initial mask map of the video frame to which the current pixel belongs comprises:
in accordance with a determination that a pixel depth value of the current pixel point (Par. 0031; for all pixel points p in the new color frame Rc …corresponding depth dp…) is lower than the split line depth value (Par. 0012; average depth of reference lines…dw), and that the current pixel point is located in a mask area of the initial mask map (Par. 0008; fine foreground mask Fp), maintaining an original pixel value of the current pixel point, and determining the original pixel value as the pixel value (Par. 0012; blur the point with a depth greater than dw…the greater the depth, the greater the blurring window. Par. 0095; blurring the points in Rc that have a depth greater than dw. Examiner's interpretation: dw is the split line depth, and if the pixel's depth value is less than dw, then the pixel value remains unchanged),
and in accordance with a determination that the pixel depth value of the current pixel point (Par. 0031; for all pixel points p in the new color frame Rc …corresponding depth dp…) is greater than the split line depth value (Par. 0012; average depth of reference lines…dw), and that the current pixel point is located in a mask area of the initial mask map (Par. 0008; fine foreground mask Fp), adjusting the original pixel value of the current pixel point to a first predetermined pixel value, and determining the first predetermined pixel value as the pixel value of the current pixel point (Par. 0012; blur the point with a depth greater than dw…the greater the depth, the greater the blurring window. Par. 0095; blurring the points in Rc that have a depth greater than dw).
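The claim 14 decision logic, as the examiner reads Guo, reduces to a depth comparison against dw inside the mask area. A minimal sketch (illustrative only; blurred_value stands in for the “first predetermined pixel value”, i.e., the blurred output, and is a hypothetical name):

    def output_pixel(original_value, blurred_value, dp, dw, in_mask):
        # Inside the mask area: keep the original value when the pixel
        # is nearer than the split line depth dw; otherwise adjust it
        # to the (blurred) predetermined value.
        if in_mask and dp < dw:
            return original_value
        if in_mask and dp > dw:
            return blurred_value
        return original_value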
Regarding claim 18, Guo discloses determining a depth view of a plurality of video frames of a video (Fig. 2 and Par. 0006; Step 1. Extract color frames and depth frames: Use the Kinect depth camera to obtain real-time input color frame Ic and depth frame Id sequence), and determining a split line depth value (Par. 0019; take all the points 1 that satisfy the following formula (2) as candidate points for the reference line position) of at least one depth split line corresponding to the video (Par. 0007-0009; Step 2. Deep frame stretching…Step 3. Divide the fine foreground mask…Step 4. Calculate the position of the reference line: Determine the reference lines Ileft and Iright in the left and right halves of the scene) based on the depth view (Par. 0015; maximum depth and minimum depth in all the depth frames Id are calculated…and is mapped to a depth range (dmin, dmax) by using a linear transformation. The image Id', where d1>dmin, d2<dmax, is calculated as (1)),
for the depth view, determining a depth split line (Par. 0023-0024; If k < 0, the general direction of the foreground motion is from the upper left corner to the lower right corner…the point satisfying formula (2) in the bird's-eye view image V is the candidate point of the reference lines Ileft and Iright…) corresponding to a current pixel point of a current depth view (Fig. 9 and Par. 0027; …the mapping expression of recasting the scene point Xc to the imaging plane point m is the following formula (5)), and determining a pixel value of the current pixel point (Par. 0031; …for all pixel points p in the new color frame Rc, Rd has its corresponding depth dp, and the fuzzy window size at point p is calculated…) based on a pixel depth value of the current pixel point (Par. 0031; …corresponding depth p…Fig. 10; color frame Rc and a depth frame Rd after the missing pixel is repaired) and a split line depth value of the depth split line (Par. 0020-0026;…under the same distance condition, the points with larger depth have priority…see Fig. 6 for Ileft and Iright reference line segmentation),
and determining a three-dimensional displaying video frame of the plurality of video frames of the video based on pixel values of a plurality of pixels in the depth view (Fig. 7-12 and Par. 0010-0013; Step 5. Apply the camera geometry principle to redraw the color frame Ic and the image Id' layer by layer to the new color frame Rc and the new depth frame Rd on the imaging plane…Step 7. Perform proper blurring on the distance scene…to obtain the resulting image Rcb…Step 8. Insert a reference line in the result image Rcb to obtain a result image Rcbp…The result image Rcbp obtained by inserting the result image Rcb complete the entire drawing process).
Guo does not appear to explicitly disclose an electronic device, comprising: a processor, and a storage apparatus storing a program, wherein the program, when executed by the processor, causes the processor to perform the method of video image processing comprising:
In the same art of stereoscopic and multiview imaging, Tam discloses an electronic device (Fig. 4 and Par. 0004; 3D or multiview imaging devices), comprising: a processor (Fig. 4; DIBR processor 23), and a storage apparatus storing a program, wherein the program, when executed by the processor, causes the processor to perform the method of video image processing comprising (Fig. 4; data receiver 25 and Par. 0005; storage and transmission).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, that the disclosed stereoscopic rendering method of Guo must be performed by a computer system having a processor and storage medium as taught by Tam, these being standard and practical means of implementing video and image processing. The motivation lies in the predictable result of automated execution of methods in computing devices. Applying a known technique, such as Tam’s computer execution of algorithms, to a known method, such as Guo’s stereoscopic video rendering, would yield predictable results (KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007)).
Guo does not disclose wherein the at least one depth split line is pre-set.
In the same art of stereoscopic and multiview imaging, Tam discloses wherein the at least one depth split line is pre-set (Par. 0053; selecting a value for zero-parallax setting (ZPS) between the nearest and farthest clipping planes of the depth map…).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Guo’s depth-based stereoscopic rendering to use Tam’s ZPS depth reference as a pre-set depth split line. Replacing a dynamically obtained depth reference with a selected zero-parallax depth setting is a predictable design alternative in depth-driven stereoscopic rendering systems that yields the expected benefit of stabilizing the depth boundary used for pixel classification and processing across frames.
Regarding claim 19, claim 19 is substantially equivalent to the limitation of claim 2, except for the additional limitation of the electronic device, discussed in claim 18, comprising a processor and storage apparatus storing a program (Tam Fig. 4), and therefore is rejected under the same rationale in claim 2.
Regarding claim 20, claim 20 is substantially equivalent to the limitation of claim 3, except for the additional limitation of the electronic device, discussed in claim 18, comprising a processor and storage apparatus storing a program (Tam Fig. 4), and therefore is rejected under the same rationale in claim 3.
Regarding claim 21, claim 21 is substantially equivalent to the limitation of claim 4, except for the additional limitation of the electronic device, discussed in claim 18, comprising a processor and storage apparatus storing a program (Tam Fig. 4), and therefore is rejected under the same rationale in claim 4.
Regarding claim 22, claim 22 is substantially equivalent to the limitation of claim 5, except for the additional limitation of the electronic device, discussed in claim 18, comprising a processor and storage apparatus storing a program (Tam Fig. 4), and therefore is rejected under the same rationale in claim 5.
Regarding claim 23, Guo discloses determining a depth view of a plurality of video frames of a video (Fig. 2 and Par. 0006; Step 1. Extract color frames and depth frames: Use the Kinect depth camera to obtain real-time input color frame Ic and depth frame Id sequence)
and determining a split line depth value (Par. 0019; take all the points 1 that satisfy the following formula (2) as candidate points for the reference line position) of at least one depth split line corresponding to the video (Par. 0007-0009; Step 2. Deep frame stretching…Step 3. Divide the fine foreground mask…Step 4. Calculate the position of the reference line: Determine the reference lines Ileft and Iright in the left and right halves of the scene) based on the depth view (Par. 0015; maximum depth and minimum depth in all the depth frames Id are calculated…and is mapped to a depth range (dmin, dmax) by using a linear transformation. The image Id', where d1>dmin, d2<dmax, is calculated as (1)),
for the depth view, determining a depth split line (Par. 0023-0024; If k < 0, the general direction of the foreground motion is from the upper left corner to the lower right corner…the point satisfying formula (2) in the bird's-eye view image V is the candidate point of the reference lines Ileft and Iright…) corresponding to a current pixel point of a current depth view (Fig. 9 and Par. 0027; …the mapping expression of recasting the scene point Xc to the imaging plane point m is the following formula (5)), and determining a pixel value of the current pixel point (Par. 0031; …for all pixel points p in the new color frame Rc, Rd has its corresponding depth dp, and the fuzzy window size at point p is calculated…) based on a pixel depth value of the current pixel point (Par. 0031; …corresponding depth p…Fig. 10; color frame Rc and a depth frame Rd after the missing pixel is repaired) and a split line depth value of the depth split line (Par. 0020-0026;…under the same distance condition, the points with larger depth have priority…see Fig. 6 for Ileft and Iright reference line segmentation),
and determining a three-dimensional displaying video frame of the plurality of video frames of the video based on pixel values of a plurality of pixels in the depth view (Fig. 7-12 and Par. 0010-0013; Step 5. Apply the camera geometry principle to redraw the color frame Ic and the image Id' layer by layer to the new color frame Rc and the new depth frame Rd on the imaging plane…Step 7. Perform proper blurring on the distance scene…to obtain the resulting image Rcb…Step 8. Insert a reference line in the result image Rcb to obtain a result image Rcbp…The result image Rcbp obtained by inserting the result image Rcb complete the entire drawing process).
Guo does not appear to explicitly disclose a non-transitory storage medium comprising computer-executable instructions, the computer-executable instructions, when executed by a computer processor, performing the method of video image processing comprising:
In the same art of stereoscopic and multiview imaging, Tam discloses a non-transitory storage medium comprising computer-executable instructions (Fig. 4), the computer-executable instructions, when executed by a computer processor, performing the method of video image processing comprising (Fig. 4):
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, that the disclosed stereoscopic rendering method of Guo must be performed by a computer system having a processor and storage medium as taught by Tam, these being standard and practical means of implementing video and image processing. The motivation lies in the predictable result of automated execution of methods in computing devices. Applying a known technique, such as Tam’s computer execution of algorithms, to a known method, such as Guo’s stereoscopic video rendering, would yield predictable results (KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007)).
Guo does not disclose wherein the at least one depth split line is pre-set.
In the same art of stereoscopic and multiview imaging, Tam discloses wherein the at least one depth split line is pre-set (Par. 0053; selecting a value for zero-parallax setting (ZPS) between the nearest and farthest clipping planes of the depth map…).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Guo’s depth-based stereoscopic rendering to use Tam’s ZPS depth reference as a pre-set depth split line. Replacing a dynamically obtained depth reference with a selected zero-parallax depth setting is a predictable design alternative in depth-driven stereoscopic rendering systems that yields the expected benefit of stabilizing the depth boundary used for pixel classification and processing across frames.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN whose telephone number is (571)272-6888. The examiner can normally be reached Mon-Thurs 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNY N TRAN/Examiner, Art Unit 2615
/ALICIA M HARRINGTON/Supervisory Patent Examiner, Art Unit 2615