DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s response of 1/15/2026 has been received and entered. In the response, Applicant elected Species II, corresponding to at least claims 1 and 6-12. Claims 2-5 and 13-20 stand non-elected without traverse.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6-7, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (PGPUB: 20150002627 A1) in view of Achaibou (PGPUB: 20230260143 A1).
Regarding claim 1, Wang teaches an apparatus for storing depth information, the apparatus comprising:
at least one memory; and at least one processor coupled to the at least one memory (see Fig. 1, items 60 and 56) and configured to:
obtain depth information for depth pixels of a depth map (see Fig. 8, paragraph 94: device 10 has captured an image, or left view, and obtained a corresponding image depth map; the image pair generation module 42 uses an image pair generation process 142 in which, at step S144, the left view is obtained and, at step S146, its corresponding image depth map is obtained from the depth detection process 130 or 132); and
store the respective plurality of depth values for each depth pixel of the depth map (see paragraph 75, for an image depth map, the pixel depth value of every pixel is stored by the depth detection process 130).
Wang does not expressly teach determining, based on the depth information, a respective plurality of depth values for each depth pixel of the depth map.
Achaibou teaches that the controller 130 also determines depth information using image data from the camera assembly 120. For instance, the controller 130 can generate depth images from the image data. A depth image includes a plurality of depth pixels. Each depth pixel has a value corresponding to an estimated depth, e.g., an estimated distance from a locus of the object 140 to the image sensor 195. A single depth image may also be referred to as a depth frame or a depth map (see Fig. 1, paragraph 41).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang with Achaibou's determination of depth information using image data from the camera assembly 120, in which the controller 130 generates depth images whose depth pixels each have a value corresponding to an estimated depth, in order to determine, based on the depth information, a respective plurality of depth values for each depth pixel of the depth map. Combining these prior art elements according to known methods and techniques would yield predictable results.
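For illustration only, and not as part of the claim mapping, the following minimal sketch shows a depth map whose depth pixels each store a plurality of depth values, as in the combined teaching. All names, shapes, and values are hypothetical assumptions of the sketch, not taken from Wang or Achaibou.

```python
import numpy as np

# Hypothetical sketch: an H x W depth map in which each depth pixel stores
# a plurality (K) of depth values, as in the combined teaching.
H, W, K = 480, 640, 3

# Obtain depth information for the depth pixels (random stand-in values).
rng = np.random.default_rng(0)
raw_depth = rng.uniform(0.5, 5.0, size=(H, W))  # meters, one value per pixel

# Determine a respective plurality of depth values for each depth pixel
# (here, the raw estimate plus two derived estimates) and store them together.
depth_values = np.stack(
    [raw_depth,          # first depth value: the raw estimate
     raw_depth * 0.98,   # second depth value: an alternate estimate (assumed)
     raw_depth * 1.02],  # third depth value: another estimate (assumed)
    axis=-1)             # stored shape: (H, W, K)

assert depth_values.shape == (H, W, K)
```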
Regarding claim 6, the combination teaches the apparatus of claim 1, wherein the respective plurality of depth values of a depth pixel of the depth map comprise:
a first depth value based on depth information corresponding to the depth pixel (see Achaibou, Fig. 4, paragraph 57, The depth module 440 is configured to generate depth images indicative of distance to the object 140 being imaged, e.g., based on digital signals indicative of charge accumulated on the image sensor 195. The depth module 440 may analyze the digital signals to determine a phase shift exhibited by the light (e.g., the phase shift φ described above in conjunction with FIG. 2) to determine a ToF (e.g., the ToF δ described above in conjunction with FIG. 2) of the light and further to determine a depth value (e.g., the distance d described above in conjunction with FIG. 2) of the object 140);
a second depth value based on the depth information corresponding to the depth pixel (see Achaibou, Fig. 4, paragraph 76, the optimization module 485 executes an optimization process, in which the optimization module 485 optimizes fusion energy to determine enhanced depth values. The optimization module 485 optimizes the fusion energy by minimizing the image fusion energy. The optimization process may be performed on a pixel level, i.e., the optimization module 485 minimizes the fusion energy for a pixel to determine the enhanced depth value of the pixel); and
a third depth value based on the depth information corresponding to the depth pixel (see Wang, paragraph 77: the artifacts reduction process 131A consists of two steps, as best illustrated in FIGS. 7A and 7B. In the first step, the depth value of the corner points A, B, C, and D of each block in FIG. 6C is found during the autofocusing process 124, and the depth value would be the average value of its neighboring blocks as shown in FIG. 7A, where the depth of the middle point d is defined by Eq. (1), d = (d1 + d2 + d3 + d4)/4).
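For illustration only, Wang's Eq. (1) amounts to a four-value average; a minimal sketch with hypothetical block depth values follows.

```python
# Minimal sketch of Wang's Eq. (1): the depth d of the middle (corner) point
# is the average of the depth values d1..d4 of the four neighboring blocks.
# The block depth values below are hypothetical.
d1, d2, d3, d4 = 1.0, 1.5, 1.25, 1.25
d = (d1 + d2 + d3 + d4) / 4
print(d)  # 1.25
```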
Regarding claim 7, the combination teaches the apparatus of claim 6, wherein the third depth value is based on at least one of:
an average of depth values of the depth information corresponding to the depth pixel (see Achaibou, Fig. 5, paragraph 82, a new depth value of the pixel is determined based on depth values of the other pixels in the box. For instance, the new depth value of the pixel may be an average of the depth values of the other pixels. The new depth values of the pixels are used to generate the depth enhanced image 530);
a median of the depth values of the depth information corresponding to the depth pixel; or
a likely depth value based on the depth values of the depth information corresponding to the depth pixel (see Achaibou, Figs. 1 and 10, paragraph 103: the controller 130 updates, in 1040, the depth image by assigning the depth value to the target depth pixel. The controller 130 may generate an enhanced depth image based on the depth value. The enhanced depth image represents better depth estimation than the depth image, as the depth value of the target depth pixel in the enhanced depth image has a better accuracy than the depth value of the target depth pixel in the depth image).
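For illustration only, the following minimal sketch computes the three recited alternatives (an average, a median, and a most-frequent or "likely" value) from a hypothetical set of candidate depth values for one depth pixel; it is not the implementation of either reference.

```python
import statistics

# Hypothetical candidate depth values for a single depth pixel.
candidates = [1.30, 1.32, 1.31, 2.90, 1.29]

avg = statistics.mean(candidates)    # an average of the depth values
med = statistics.median(candidates)  # a median of the depth values
# A "likely" value, here taken as the most frequent value after coarse
# rounding (one of many possible estimators; an assumption of this sketch).
likely = statistics.mode(round(c, 1) for c in candidates)
print(avg, med, likely)  # 1.624 1.31 1.3
```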
Regarding claim 10, the combination teaches the apparatus of claim 6, wherein the third depth value is based on at least one of:
a reflectivity of a point in a scene corresponding to the depth pixel;
an opacity of the point in the scene corresponding to the depth pixel (see Wang, paragraph 95: the image pair generation process 142 first assumes, at step S144, that the obtained image is the left view of the stereoscopic system; alternately, the image could be considered the right view. Then, based on the image depth map obtained at step S146, a disparity map (the distance in pixels between the image points in both views) for the image is calculated at step S148 in the disparity map sub-module 44);
a pixel coverage between a foreground and a background;
a count of a mode of depth values of the depth information corresponding to the depth pixel; or
a confidence related to one or both of the first depth value and the second depth value.
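For context only, the disparity relationship in Wang's cited teaching (the distance in pixels between corresponding image points in the two views) is conventionally derived from depth as sketched below; the focal length and baseline values are hypothetical assumptions, and this is not Wang's implementation.

```python
import numpy as np

# Hypothetical sketch of a disparity map derived from a depth map using the
# standard stereo relation disparity = focal_length * baseline / depth.
focal_px = 800.0    # focal length in pixels (assumed)
baseline_m = 0.06   # distance between the two views in meters (assumed)

depth_map = np.full((480, 640), 2.0)               # hypothetical depths (m)
disparity_map = focal_px * baseline_m / depth_map  # pixels between the views
print(disparity_map[0, 0])  # 24.0
```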
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wang (PGPUB: 20150002627 A1) in view of Achaibou (PGPUB: 20230260143 A1), and further in view of DESAI (PGPUB: 20230128756 A1).
Regarding claim 8, the combination teaches the apparatus of claim 6, including the first depth value and the second depth value for the respective plurality of depth values of the depth pixel (see Achaibou, paragraph 135: determining a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image; determining an energy for the target depth pixel based on the boundary weight; determining a new depth value of the target depth pixel by optimizing the energy; and updating the depth image by assigning the new depth value to the target depth pixel).
The combination does not expressly teach that the depth values represent a confidence interval.
DESAI teaches that each key point in each camera image has a depth map value for it. The system may use those depth map values to produce a confidence interval for the depth of that key point correspondence across the images. This confidence interval will then provide an indication of how “stable” the depth map estimate is for the key point correspondence (see paragraph 61).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with DESAI's use of depth map values to produce a confidence interval for the depth of a key point correspondence across the images, in order to provide depth values that represent a confidence interval, as taught by DESAI. Combining these prior art elements according to known methods and techniques would yield predictable results.
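For illustration only, producing a confidence interval from several depth map values for the same key point, in the manner of DESAI's cited teaching, can be sketched as follows; the values are hypothetical and a normal approximation is assumed, which DESAI does not specify.

```python
import statistics

# Hypothetical depth map values for one key point across several images.
depths = [2.01, 1.98, 2.05, 2.00, 1.97]

mean = statistics.mean(depths)
stdev = statistics.stdev(depths)

# Approximate 95% confidence interval under a normal approximation; a
# narrow interval indicates a "stable" depth estimate for the key point.
half_width = 1.96 * stdev / len(depths) ** 0.5
interval = (mean - half_width, mean + half_width)
print(interval)
```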
Claims 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (PGPUB: 20150002627 A1) in view of Achaibou (PGPUB: 20230260143 A1), and further in view of Xi (CN 105374019 A).
Regarding claim 9, the combination does not expressly teach the apparatus of claim 6, wherein:
the first depth value is based on a minimum depth value of depth values of the depth information corresponding to the depth pixel; and the second depth value is based on a maximum depth value of the depth values of the depth information corresponding to the depth pixel.
Xi teaches, for any projection point of the N-1 projection points, determining the maximum depth value and the minimum depth value within the corresponding image area; and obtaining the difference between the maximum depth value and the minimum depth value, wherein when the difference exceeds a preset difference threshold, the difference is set equal to the difference threshold (see page 8, lines 1-5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with Xi's determination, for any projection point of the N-1 projection points, of the maximum and minimum depth values within the corresponding image area and of their difference, limited to a preset difference threshold, in order to provide the first depth value based on a minimum depth value of the depth values of the depth information corresponding to the depth pixel and the second depth value based on a maximum depth value of those depth values, as taught by Xi. Combining these prior art elements according to known methods and techniques would yield predictable results.
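For illustration only, Xi's cited operation of taking the maximum and minimum depth values in an image area and limiting their difference to a preset threshold can be sketched as follows; the values are hypothetical.

```python
# Hypothetical depth values within the image area of one projection point.
area_depths = [1.10, 1.25, 1.40, 3.80, 1.22]
diff_threshold = 1.0  # preset difference threshold (assumed)

d_max, d_min = max(area_depths), min(area_depths)
diff = d_max - d_min
if diff > diff_threshold:  # when the difference exceeds the preset threshold,
    diff = diff_threshold  # the difference is set equal to the threshold
print(d_min, d_max, diff)  # 1.1 3.8 1.0
```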
Regarding claim 11, the combination does not expressly teach the apparatus of claim 6, wherein:
the first depth value represents a foreground depth; and the second depth value represents a background depth.
Xi teaches, in a fifth possible implementation of the first aspect (or of any of its first through fourth possible implementations), judging whether the depth value of a foreground point in three-dimensional space, under the respective coordinate systems of the N-1 image collecting units, is greater than or equal to the depth values of the N-1 projection points corresponding to the first foreground pixel point (see page 7, lines 8-12); and obtaining a first pixel point in the depth image whose depth value is less than or equal to a preset depth threshold and whose corresponding pixel point in the ith color image has a color value not equal to the background color value of the ith color image, and taking that first pixel point as the first foreground pixel point, where the ith color image is the image collected by the ith image collecting unit (see page 14, lines 5-9).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with Xi's foreground determination, in which a pixel point whose depth value is less than or equal to a preset depth threshold and whose color value differs from the background color value is taken as a foreground pixel point, and foreground depth values are compared against the depth values of the corresponding projection points, in order to provide the first depth value representing a foreground depth and the second depth value representing a background depth, as taught by Xi. Combining these prior art elements according to known methods and techniques would yield predictable results.
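For illustration only, the foreground test in Xi's cited teaching (a pixel whose depth value is at most a preset depth threshold and whose color value differs from the background color value is taken as a foreground pixel) can be sketched as follows; all values are hypothetical.

```python
import numpy as np

# Hypothetical depth image and color image for the ith collecting unit.
depth_image = np.array([[0.8, 2.5], [1.0, 3.0]])  # depth values (meters)
color_image = np.array([[200, 0], [180, 0]])      # grayscale stand-in
background_color = 0                              # assumed background value
depth_threshold = 1.5                             # preset depth threshold

# A pixel is foreground when its depth value is at most the threshold and
# its color value is not equal to the background color value.
foreground = (depth_image <= depth_threshold) & (color_image != background_color)
print(foreground)  # [[ True False] [ True False]]
```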
Allowable Subject Matter
Claim 12 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN JIA whose telephone number is (571)270-5536. The examiner can normally be reached 9:00 am - 7:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIN JIA/Primary Examiner, Art Unit 2663