Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4, 7, 11, 14, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 4 recites the limitation "The information processing device according to Claim 3, wherein the at least one processor determines the first depth condition based on depth information of a pixel corresponding to the first region in the depth image." However, Claim 3 refers to this first region as a "first region in the color image". Claim 3 states "pixels of the color image are mapped to pixels of the depth image, wherein the at least one processor identifies a first region in the color image, color information of a pixel in the first region satisfying a first color condition related to color of the detection target".
Therefore, while some depth information exists in the prior claims, the color image contains the first region and the depth image contains the second region. The mapping of pixels of the color image to pixels of the depth image would not necessarily modify the first or second regions, or their respective images, but would rather simply provide a map between them. Therefore, Claim 4 fails to distinctly claim the subject matter which the inventor(s) regard as the invention.

Claim 7 recites the limitation "The information processing device according to Claim 6, wherein the at least one processor determines a width of the predetermined range based on a size of a region corresponding to the third region in the depth image." Specifically, Claim 7 appears to contradict itself: the "predetermined" range is in fact not predetermined but is instead determined "based on a size of a region corresponding to the third region in the depth image". It is unclear what about this range could be predetermined when the processor determines its width from the depth image, as the width of a range, together with its center, are typically the only features that need determining in order to "predetermine" a range. Finally, the specification describes different embodiments in which, for example, the width is also predetermined, and it is therefore difficult to discern which aspects of this range are predetermined and which are determined. (Specification paragraph 0036 states: "The first depth condition may be determined by the CPU 11 based on the depth information of the pixels corresponding to the first region R1 in the depth image 41 identified in step S204. For example, the region having the largest area in the first region R1 may be identified, and a depth range of a predetermined width centered on the representative value (average, median, or the like) of the depth of the region corresponding to that region in the depth image 41 may be set to the first depth range".) Therefore, Claim 7 fails to distinctly claim the subject matter which the inventor(s) regard as the invention.

Claim 11 is rejected for containing similar limitations to Claim 4 described above. Claim 14 is rejected for containing similar limitations to Claim 7 described above. Claim 18 is rejected for containing similar limitations to Claim 4 described above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vit et al. ("Comparing RGB-D Sensors for Close Range Outdoor Agricultural Phenotyping") and further in view of Jung (US Publication No. 20170069071 A1).
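For reference on the Claim 7 indefiniteness discussion above, the depth-range construction described in Specification paragraph 0036 (a range of predetermined width centered on a representative value of the depths mapped from the first region) can be sketched as follows. The function name, the choice of median as the representative value, and the numeric width are illustrative assumptions, not taken from the record.

```python
# Hypothetical sketch of the depth-range construction of Specification
# paragraph 0036: a range of predetermined width centered on a
# representative value (here, the median) of the depth-image pixels
# corresponding to the largest area within the first region R1.
# The width value and units (e.g., mm) are illustrative assumptions.
from statistics import median

def first_depth_range(region_depths, width=200):
    """region_depths: depth values of the depth-image pixels mapped
    from the largest sub-region of the first region."""
    center = median(region_depths)
    return (center - width / 2, center + width / 2)

# An outlier (2500) does not shift the median-centered range much:
low, high = first_depth_range([995, 1000, 1004, 1010, 2500])
# pixels whose depth falls within [low, high] satisfy the first depth condition
```

Under this reading, only the width is "predetermined", while the center is determined at run time, which is the ambiguity the rejection identifies.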
Regarding Claim 1, Vit discloses an information processing device comprising: at least one processor (note "three computers" in Section 3.3 Procedure, where three different computers, each of which by definition contains at least one processor, are connected to various RGB-D cameras such as the Kinect II) that acquires color information and depth information from an image (note "RGB-D Sensors" in Section 2.3, comparing RGB-D sensors to use in the experiment, where an RGB-D sensor acquires RGB (color) information and depth information, and one of the sensors mentioned is the Kinect II), but does not necessarily disclose that the image is of a subject captured by at least one camera, the depth information being related to a distance from the at least one camera to the subject, and that the processor detects a detection target based on the color information and the depth information that have been acquired, the detection target being at least a part of the subject in the image. Applicant's specification in paragraph 0007 states plainly "an operator 70 (subject) with the hand 71 (detection target)", and therefore the closest reference to teach such subject-focused limitations should also use a person as the subject, such as Jung below.
Instead, Jung teaches color information and depth information from an image of a subject captured by at least one camera (reference "data input unit 10"; see Specification paragraph 0045, where the data input unit receives an RGB image and a depth image from cameras), the depth information being related to a distance from the at least one camera to the subject (reference "person region extractor 40"; see Specification paragraph 0040, where specifically the depth and 3D distance are used in person, or subject, extraction from the RGB-D image to extract the final detection target, a person), and detects a detection target based on the color information and the depth information that have been acquired, the detection target being at least a part of the subject in the image (reference "person region extractor 40"; see Specification paragraph 0073, where pixels of color corresponding to the grouped depth values are extracted, and the person extracted in Figure 2H is the final detection target). Jung also teaches the motivation to improve the ability to separate a person from space regardless of lighting (see Specification paragraph 0107). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Vit in view of Jung.

Regarding Claim 2, Vit discloses the information processing device according to claim 1, but fails to disclose the remaining limitations. Instead, Jung teaches wherein the image includes multiple images (note "images" in Specification paragraph 0043, where the images are also shown in Figures 2A and 2B), and wherein the multiple images include a color image that includes the color information and a depth image that includes the depth information (note "color image" and "depth image" in Specification paragraph 0043, which describes Figures 2A and 2B as the color and depth images, respectively, as the multiple images).
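As a purely illustrative aside, the color-plus-depth detection recited in Claims 1-3 and mapped to Jung above can be sketched as follows, assuming the color and depth images are already mapped pixel to pixel. All function names, thresholds, and pixel values are hypothetical and are not taken from Vit, Jung, or the application.

```python
# Illustrative sketch (not from the record): a first region from a color
# condition, a second region from a depth condition, and the detected
# third region as the overlap of the two. Color and depth grids are
# assumed to be pixel-to-pixel mapped; all values are hypothetical.
def detect_target(color, depth, is_target_color, depth_range):
    """color: grid of (r, g, b) tuples; depth: grid of distances, same shape."""
    low, high = depth_range
    return [
        [is_target_color(color[y][x]) and low <= depth[y][x] <= high
         for x in range(len(color[0]))]
        for y in range(len(color))
    ]

color = [[(200, 30, 30), (10, 10, 10)],   # one reddish pixel, rest dark
         [(10, 10, 10), (10, 10, 10)]]
depth = [[1000, 1000], [3000, 1000]]
mask = detect_target(color, depth,
                     is_target_color=lambda p: p[0] > 128,  # "red enough"
                     depth_range=(900, 1100))
# only the pixel satisfying both the color and depth conditions is detected
```

The intersection of the two per-pixel conditions is one plausible reading of the claimed "third region overlapping both" regions.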
Jung also teaches the motivation to improve the ability to separate a person from space regardless of lighting (see Specification paragraph 0107). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Vit in view of Jung.

Regarding Claim 3, Vit discloses the information processing device according to claim 2, but fails to disclose wherein, in an overlapping range where an imaging area of the color image and an imaging area of the depth image overlap, pixels of the color image are mapped to pixels of the depth image, wherein the at least one processor identifies a first region in the color image, color information of a pixel in the first region satisfying a first color condition related to color of the detection target, identifies a second region in the depth image, depth information of a pixel in the second region satisfying a first depth condition related to a distance from the at least one camera to the detection target, and detects a region including a third region in the overlapping range as the detection target, the third region overlapping both a region corresponding to the first region and a region corresponding to the second region.

Instead, Jung discloses wherein, in an overlapping range where an imaging area of the color image and an imaging area of the depth image overlap, pixels of the color image are mapped to pixels of the depth image (note "the image of FIGS. 2A and 2B are matched" in Specification paragraph 0063, where, once the pictures are acquired, the background is removed and the pictures are matched, or mapped, to each other; note also in Figures 2A-2C that the regions overlap, and the area segmented in 2C specifically is entirely overlapped by both the 2A and 2B images; see further paragraph 0052, describing specifically the pixel-by-pixel mapping of the color and depth images to each other), wherein the at least one processor identifies a first region in the color image (see Specification paragraph 0063 and Figures 2A-2H, stepping through this process in total; note Figure 2D, which returns to the color domain, and paragraph 0063, describing the foreground extraction which uses "a difference in RGB image"), color information of a pixel in the first region satisfying a first color condition related to color of the detection target (see "color distributions" in Specification paragraphs 0059 and 0060, where a foreground region is extracted based on color distributions, or color data related to the foreground, and a bounding box is generated including the contour of the foreground image; this is shown in Figures 2D and 2E), identifies a second region in the depth image (reference "ROI extractor 20"; see paragraph 0065, where the skeleton information from the bounding box is matched), depth information of a pixel in the second region satisfying a first depth condition related to a distance from the at least one camera to the detection target (see paragraph 0052, describing the pixel-by-pixel mapping of the color and depth images to each other; see reference "S350 and S360" in paragraph 0093 and Figure 2F, which shows the skeleton matched; in S350, skeleton information is extracted and then matched to the 3D position to result in Figure 2F, or S360; this region is extracted as the ROI, which is passed to the depth corrector), and detects a region including a third region in the overlapping range as the detection target, the third region overlapping both a region corresponding to the first region and a region corresponding to the second region (see Specification paragraph 0066, describing the matching of the final model as the final 3D person region estimated to contain the person; this region is the Region of Interest, or ROI, which may be further passed to a depth information corrector).

Jung also teaches the motivation to improve the ability to separate a person from space regardless of lighting (see Specification paragraph 0107). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Vit in view of Jung.

Regarding Claim 4, Vit discloses the information processing device according to claim 3, but fails to disclose wherein the at least one processor determines the first depth condition based on depth information of a pixel corresponding to the first region in the depth image. Instead, Jung discloses wherein the at least one processor determines the first depth condition based on depth information of a pixel corresponding to the first region in the depth image (Examiner's Note: please refer to the rejection under 35 U.S.C. 112(b) for additional details regarding the interpretation of Claim 4, specifically noting that the first region is in the color image and that there is a potential lack of clarity as to what actual image or region this pixel is located in and to what image or region it corresponds; it is currently assumed the pixel merely corresponds to the first region in the sense that the images were once mapped to one another, providing a relationship from the first region to the depth image. Returning to Jung, see Specification paragraph 0065, where the ROI extractor, as mentioned above, utilizes depth information, and paragraph 0052, where these images had previously been mapped pixel by pixel to each other and thus correspond to each other before the determining of the first depth condition; further, see Figures 2C and 2D, showing the depth and color regions corresponding to each other in their respective domains). Jung also teaches the motivation to improve the ability to separate a person from space regardless of lighting (see Specification paragraph 0107).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Vit in view of Jung.

Regarding Claim 5, Vit discloses the information processing device according to claim 3, but fails to disclose wherein the at least one processor determines a second depth condition based on depth information of a pixel corresponding to the third region in the depth image, identifies a fourth region in the first region of the color image, the fourth region corresponding to a region in the depth image where depth information of a pixel of the fourth region satisfying the second depth condition, and detects a region including the third region and a region corresponding to the fourth region in the color image in the overlapping range as the detection target.

Instead, Jung discloses wherein the at least one processor determines a second depth condition based on depth information of a pixel corresponding to the third region in the depth image, identifies a fourth region in the first region of the color image, the fourth region corresponding to a region in the depth image where depth information of a pixel of the fourth region satisfying the second depth condition (reference "person region extractor 40" and "depth information corrector 30" in Specification paragraph 0072, where the person region extractor receives the RGB-D image from the depth information corrector 30 and divides the depth data of the ROI extracted by the ROI extractor 20 into groups based on the depth data; that is, the ROI extracted previously in the first region and then depth-corrected as the second region above is also divided by depth information, and therefore its regions are divided by a depth condition which is different from the first depth condition cited above for the second region), and detects a region including the third region and a region corresponding to the fourth region in the color image in the overlapping range as the detection target (reference "person region extractor 40"; see Specification paragraph 0074, where at last the person region extractor extracts the final person, as shown in Figure 2H; specifically, the pixels of the color image corresponding to the grouped depth data values previously corrected above are extracted by the person region extractor, see Specification paragraph 0073, recalling that the goal was to extract a person, who is the final detection target). Jung also teaches the motivation to improve the ability to separate a person from space regardless of lighting (see Specification paragraph 0107). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Vit in view of Jung.

Regarding Claim 6, Vit discloses the information processing device according to claim 5, but fails to disclose wherein a distance from the at least one camera to a portion captured in a pixel corresponding to the fourth region satisfying the second depth condition is within a predetermined range that includes a representative value of a distance from the at least one camera to a portion captured in a pixel corresponding to the third region.
Instead, Jung discloses wherein a distance from the at least one camera to a portion captured in a pixel corresponding to the fourth region satisfying the second depth condition is within a predetermined range that includes a representative value of a distance from the at least one camera to a portion captured in a pixel corresponding to the third region (see Specification paragraph 0070, which further describes the depth information corrector 30 as applying Gaussian filtering to the depth image to correct it; Gaussian filtering performed on depth data with a predetermined cutoff range reads as a predetermined range that includes a representative value of a distance. Noting this 3D distance is also described with reference to "person region extractor 40", see Specification paragraph 0072, where the 3D distances satisfying the depth condition are determined according to K-means clustering; also note paragraph 0073, where the person region extractor extracts pixels of the color image corresponding to the depth data values grouped by the above K-means clustering). Jung also teaches the motivation to improve the ability to separate a person from space regardless of lighting (see Specification paragraph 0107). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Vit in view of Jung.

Regarding Claim 7, Vit discloses the information processing device according to Claim 6, but fails to disclose wherein the at least one processor determines a width of the predetermined range based on a size of a region corresponding to the third region in the depth image.
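For illustration of the Claim 6 analysis above, the grouping of depth values by K-means clustering cited from Jung's Specification paragraph 0072 can be sketched with a minimal one-dimensional K-means. The function, the depth values, and the cluster count are hypothetical and are not taken from Jung.

```python
# Hypothetical sketch of grouping depth values into clusters, in the
# manner of the K-means clustering cited from Jung (Specification
# paragraph 0072). A minimal 1-D K-means; data and k are illustrative.
def kmeans_1d(values, k=2, iters=20):
    # seed centers from evenly spaced sorted values
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            # assign each depth value to its nearest cluster center
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # recompute centers as group means (keep old center if group empty)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

depths = [1000, 1010, 990, 3000, 3020]        # person vs. background depths
near, far = sorted(kmeans_1d(depths), key=min)
# pixels in the near group would be kept as the person (detection) region
```

Notably, a clustering of this kind derives its group boundaries from the data, which is consistent with the Examiner's Note for Claim 7 that the cluster partitions, or widths, are determined rather than predetermined.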
Instead, Jung discloses wherein the at least one processor determines a width of the predetermined range based on a size of a region corresponding to the third region in the depth image (Examiner's Note: see the rejection of Claim 7 under 112(b), where it is not entirely clear what about this predetermined range is predetermined if its width must be determined. Returning to Jung, see the rejection of Claim 6 above regarding the predetermined range being determined from the depth information, and note the K-means clustering previously described in Specification paragraph 0072, which inherently determines cluster partitions, or widths, based on the distance to cluster centers).

Regarding Claim 8, it is rejected for limitations similar to those described in Claim 1 above, albeit in the form of a "method executed by a computer", which is also disclosed by Vit (note "three computers" in Section 3.3 Procedure, where three different computers with cameras such as the Kinect II are used to implement the procedure; a procedure reads as a method).

Claim 9 is rejected for containing similar limitations to those described in Claim 2 above. Claim 10 is rejected for containing similar limitations to those described in Claim 3 above. Claim 11 is rejected for containing similar limitations to those described in Claim 4 above. Claim 12 is rejected for containing similar limitations to those described in Claim 5 above. Claim 13 is rejected for containing similar limitations to those described in Claim 6 above. Claim 14 is rejected for containing similar limitations to those described in Claim 7 above.

Regarding Claim 15, it is also rejected for limitations similar to those described in Claim 1 above, albeit in the form of "a non-transitory computer-readable storage medium storing a program", which is also disclosed by Vit (note "three computers" and "SDK" in Section 3.3 Procedure, where three different computers with cameras such as the Kinect II are used to implement the procedure; an SDK, or software development kit, installed on these computers reads as a program).

Claim 16 is rejected for containing similar limitations to those described in Claim 2 above. Claim 17 is rejected for containing similar limitations to those described in Claim 3 above. Claim 18 is rejected for containing similar limitations to those described in Claim 4 above. Claim 19 is rejected for containing similar limitations to those described in Claim 5 above. Claim 20 is rejected for containing similar limitations to those described in Claim 6 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER JOHN RODGERS, whose telephone number is (703) 756-1993. The examiner can normally be reached 5:30 AM to 2:30 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXANDER JOHN RODGERS/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661