DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Typographic Conventions
Throughout this Office action, shorthand notation for referencing locations of elements in documents is utilized. The following is a brief summary of the shorthand used:
Sec. – is used to denote an associated section with a header in non-patent literature
¶ – is used to denote the number and location of a paragraph
col. – is used to denote a column number
ln. – is used to denote a line; if line numbers are not demarcated in a document, line numbering will be assumed to start at 1 for each paragraph.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 04/24/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings are objected to because, as per 37 CFR 1.84(b)(1):
“(1) Black and white. Photographs, including photocopies of photographs, are not ordinarily permitted in utility and design patent applications. The Office will accept photographs in utility and design patent applications, however, if photographs are the only practicable medium for illustrating the claimed invention. For example, photographs or photomicrographs of: electrophoresis gels, blots (e.g., immunological, western, Southern, and northern), autoradiographs, cell cultures (stained and unstained), histological tissue cross sections (stained and unstained), animals, plants, in vivo imaging, thin layer chromatography plates, crystalline structures, and, in a design patent application, ornamental effects, are acceptable. If the subject matter of the application admits of illustration by a drawing, the examiner may require a drawing in place of the photograph. The photographs must be of sufficient quality so that all details in the photographs are reproducible in the printed patent.”
The drawings do not meet the requirements for photographs as stipulated above. In particular, Figures 1, 2, 6A, 6B, 7A, 7B, 8A, 8B appear to be photographs. Applicant is required to submit black and white line drawings.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claim 15 is objected to because of the following informalities: Claim 15 is directed to a system; however, further in the preamble, claim 15 recites “said method comprising:”. The examiner believes this was intended to recite “said system comprising:”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 11 & 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 11 recites “a plurality of supplemental light sources” and “a plurality of input images”. It is unclear whether these supplemental light sources or input images are directly the same as, or supplemental to, the first and second supplemental light sources, or the first, second, and third input images recited in claim 1, respectively, rendering claim 11 indefinite. For the purposes of compact prosecution, these elements will be interpreted as directly corresponding to the matching elements of claim 1.
Similarly, claim 12 recites “a plurality of supplemental light sources” and “a plurality of input images”. It is again unclear whether these supplemental light sources or input images are the same as, or supplemental to, the first and second supplemental light sources, or the first, second, and third input images recited in claim 1, respectively, rendering claim 12 indefinite. For the purposes of compact prosecution, these elements will be interpreted as directly corresponding to the matching elements of claim 1.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 12, 14-23 are rejected under 35 U.S.C. 103 as being unpatentable over Raskar et al (US 2004/0183812) in view of Matsumoto; Shinya (US 2019/0170506).
Regarding claim 1, Raskar et al (hereinafter referred to as “Raskar”) disclose a method to reduce texture details (the examiner notes that here, a texture is being interpreted as a type of 2D feature) in an image to capture edges of a subject. More specifically, Raskar teach A method for eliminating two-dimensional (2D) features from an image (method 200 of Fig. 2, which highlights identifying and de-emphasizing texture regions in steps 230 & 240 [¶0045-47]), said method comprising:
providing a workspace (the field of view of the digital camera 100 of Fig. 1A) with a single 2D sensor at a fixed location and pose (the digital camera 100 of Fig. 1A, which can capture a plurality of images of a subject in rapid succession from a fixed position and angle relative to the subject [¶0045-47]), and first and second supplemental light sources fixed at different locations from each other (flash units 101-104 of Fig. 1A with distinct locations [¶0038-39]);
providing, by the 2D sensor (digital camera 100 of Fig. 1A [¶0038]), a first input image of a subject under ambient lighting (step 301 of Fig. 3A, where an image is taken under ambient lighting [¶0059]), a second input image of the subject under the ambient lighting plus the first supplemental light source (step 302 of Fig. 3A, where an image is taken with a light source (flash unit 101 of Fig. 1A [¶0038-39]) [¶0060]), and a third input image of the subject under the ambient lighting plus the second supplemental light source (step 302 of Fig. 3A, where n images are taken, each with any one of a plurality of light sources (flash units 101-104 of Fig. 1A with distinct locations, allowing up to five lighting conditions) [¶0038-39 & 0060]); and
computing an output image having the 2D features eliminated (output stylized image 201 of Fig. 2, with reduced texture and edges enhanced [¶0045-47], with an exemplary image demonstrated in Fig. 3B [¶0067]), on a computer having a processor (microprocessor 120 [¶0043; Fig. 1A]) and memory (memory 130 [¶0043; Fig. 1A]), by subtracting the first input image from the second input image to produce a first difference (step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce difference images 303 [¶0061; Fig. 3A]), subtracting the first input image from the third input image to produce a second difference (step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce difference images 303 [¶0061; Fig. 3A]), and while Raskar disclose calculating a ratio image, they do not teach dividing one difference by another.
Matsumoto; Shinya (hereinafter referred to as “Matsumoto”), however, is analogous art pertinent to the field of endeavor of the present application and discloses a system for projecting light to obtain dimensional positions of an object using a division image. More specifically, Matsumoto teach and dividing the first difference by the second difference (Matsumoto: step S311 of Figure 3, where a division image is generated; the system can be configured to generate the division image by taking the difference of a first captured image (illuminated by a first light projection control unit 113 [¶0128; Fig. 2]) and a third captured image (under ambient lighting) and dividing it by the difference of a second captured image (illuminated by a second light projection control unit 115 [¶0128; Fig. 2]) and the third captured image [¶0133]). Additionally, Matsumoto discloses that subtracting the ambient light from the images and subsequently performing image division of the differences allows for the removal of the ambient light contribution and reflections from the surface of the object [¶0048].
Therefore, it would have been obvious before the effective filing date of the present application to implement the division image generation step outlined by Matsumoto with the texture reduction method provided by Raskar to arrive at the invention of the instant application.
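For illustration only, the combined operation mapped above (Raskar's subtraction of the ambient image from each illuminated image, followed by Matsumoto's division of one difference by the other) may be sketched as follows. This is a minimal sketch by the examiner assuming co-registered grayscale images stored as NumPy arrays; the function and variable names are illustrative and do not appear in either reference.

    import numpy as np

    def ratio_image(ambient, flash_1, flash_2, eps=1e-6):
        # Work in floating point so the subtraction and division are exact.
        ambient = ambient.astype(np.float64)
        # First and second differences: each isolates the contribution of
        # one supplemental light source by removing the ambient component.
        diff_1 = flash_1.astype(np.float64) - ambient
        diff_2 = flash_2.astype(np.float64) - ambient
        # Divide the first difference by the second, pixel by pixel; eps
        # guards against division by zero in regions neither source reaches.
        return diff_1 / (diff_2 + eps)

In such a quotient, per-pixel factors common to both differences (e.g., surface albedo and texture) tend to cancel, consistent with the texture de-emphasis and ambient-light removal described above.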
Regarding claim 2, Raskar in view of Matsumoto teach The method according to Claim 1 (as described above) wherein the 2D sensor is a 2D camera (Raskar: digital camera 100 [¶0038]).
Regarding claim 3, Raskar in view of Matsumoto teach The method according to Claim 1 (as previously described) wherein the subject has a flat surface (Raskar: the table illustrated in Fig. 3B) from which the 2D features are eliminated in the output image (Raskar: output stylized image 201 of Fig. 2, with reduced texture and edges enhanced [¶0045-47], with an exemplary image demonstrated in Fig. 3B [¶0067]).
Regarding claim 4, Raskar in view of Matsumoto teach The method according to Claim 3 (as described above), wherein the 2D sensor is aimed either perpendicularly or (Examiner notes that the use of the disjunctive “or” necessitates mapping to only one of the recited alternatives) at an oblique angle toward the flat surface (Raskar: the digital camera 100 of Fig. 1A can reasonably be aimed at either a perpendicular or an oblique angle toward the subject [¶0041; Fig. 1A]).
Regarding claim 5, Raskar in view of Matsumoto teach The method according to Claim 3 (as described previously) wherein the supplemental light sources are aimed at oblique angles toward the flat surface (Raskar: the flash units 101-104 of Fig. 1A can reasonably be aimed at oblique angles toward the subject [¶0041; Fig. 1A]).
Regarding claim 12, Raskar in view of Matsumoto teach The method according to Claim 1 (as described above) wherein the subject has a plurality of flat surfaces (Raskar: while the example subject of Fig. 3B has only one observable flat surface, one of ordinary skill would recognize that the methods 200 & 300 of Figs. 2 & 3, respectively, in their normal and usual operation, would be applicable to a subject or subjects with a plurality of flat surfaces, MPEP § 2112.02), where a plurality of supplemental light sources are provided in the workspace (Raskar: flash units 101-104 [¶0038-39]), and where a plurality of input images are used to selectively remove the 2D features from each of the flat surfaces in a separate output image (Raskar: input images 110-114 are utilized to generate the stylized images of method 200 [¶0045-50; Fig. 2]), and the separate output images are combined in a composite output image having the 2D features eliminated from each of the flat surfaces (Raskar: difference images 303 are combined with maximum images 304 to output silhouette pixels 306 of a composite stylized image [¶0061-66; Fig. 3A]).
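As a generic illustration of the final combining step (the examiner's sketch; this shows one simple way separate per-surface output images could be merged into a composite, and does not reproduce Raskar's specific combination of difference images 303 with maximum images 304):

    import numpy as np

    def composite_output(per_surface_outputs):
        # per_surface_outputs: a list of same-shaped arrays, one
        # texture-suppressed output image per flat surface. Taking the
        # per-pixel maximum is one simple combination rule among many.
        return np.maximum.reduce([img.astype(np.float64) for img in per_surface_outputs])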
Regarding claim 14, Raskar disclose a method to reduce texture details (the examiner notes that here, a texture is being interpreted as a type of 2D feature) in an image to capture edges of a subject. More specifically, Raskar teach A method for eliminating two-dimensional (2D) features from an image (Raskar: method 200 of Fig. 2, which highlights identifying and de-emphasizing texture regions in steps 230 & 240 [¶0045-47]), said method comprising:
providing, by a 2D sensor (Raskar: digital camera 100 of Fig. 1A [¶0038]), a first input image of a subject under ambient lighting (step 301 of Fig. 3A, where an image is taken under ambient lighting [¶0059]), a second input image of the subject under the ambient lighting plus a first supplemental light source (Raskar: step 302 of Fig. 3A, where an image is taken with a light source (flash unit 101 of Fig. 1A [¶0038-39]) [¶0060]), and a third input image of the subject under the ambient lighting plus a second supplemental light source (Raskar: step 302 of Fig. 3A, where n images are taken, each with any one of a plurality of light sources (flash units 101-104 of Fig. 1A with distinct locations, allowing up to five lighting conditions) [¶0038-39 & 0060]); and
computing an output image having the 2D features eliminated (Raskar: output stylized image 201 of Fig. 2, with reduced texture and edges enhanced [¶0045-47], with an exemplary image demonstrated in Fig. 3B [¶0067]), on a computer having a processor (microprocessor 120 [¶0043; Fig. 1A]) and memory (memory 130 [¶0043; Fig. 1A]), by subtracting the first input image from the second input image to produce a first difference (step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce difference images 303 [¶0061; Fig. 3A]), subtracting the first input image from the third input image to produce a second difference (step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce difference images 303 [¶0061; Fig. 3A]), and while Raskar disclose calculating a ratio image, they do not teach dividing one difference by another.
Matsumoto, however, is analogous art pertinent to the field of endeavor of the present application and discloses a system for projecting light to obtain dimensional positions of an object using a division image. More specifically, Matsumoto teach and dividing the first difference by the second difference (Matsumoto: step S311 of Figure 3, where a division image is generated; the system can be configured to generate the division image by taking the difference of a first captured image (illuminated by a first light projection control unit 113 [¶0128; Fig. 2]) and a third captured image (under ambient lighting) and dividing it by the difference of a second captured image (illuminated by a second light projection control unit 115 [¶0128; Fig. 2]) and the third captured image [¶0133]). Additionally, Matsumoto discloses that subtracting the ambient light from the images and subsequently performing image division of the differences allows for the removal of the ambient light contribution and reflections from the surface of the object [¶0048].
Therefore, it would have been obvious before the effective filing date of the present application to implement the division image generation step outlined by Matsumoto with the texture reduction method provided by Raskar to arrive at the invention of the instant application.
Regarding claim 15, Raskar disclose a method to reduce texture details (the examiner notes that here, a texture is being interpreted as a type of 2D feature) in an image to capture edges of a subject. More specifically, Raskar teach A system for eliminating two-dimensional (2D) features from an image of a subject (Raskar: digital camera 100 of Fig. 1A, capable of performing methods to render stylized images of a subject [¶0038]), said method comprising:
a 2D sensor (digital camera 100 of Fig. 1A [¶0038]) in a fixed position and pose aimed at the subject (Raskar: the digital camera 100 of Fig. 1A, which can capture a plurality of images of a subject in rapid succession from a fixed position and angle relative to the subject [¶0045-47]); first and second supplemental light sources in different fixed positions aimed at the subject (Raskar: flash units 101-104 of Fig. 1A with distinct locations [¶0038-39]); and
a computer having a processor (Raskar: microprocessor 120 [¶0043; Fig. 1A]) and memory (Raskar: memory 130 [¶0043; Fig. 1A]), said computer being in communication with the 2D sensor and configured to:
receive from the 2D sensor a first input image of the subject under ambient lighting (Raskar: step 301 of Fig. 3A, where an image is taken under ambient lighting [¶0059]), a second input image of the subject under the ambient lighting plus the first supplemental light source (Raskar: step 302 of Fig. 3A, where an image is taken with a light source (flash unit 101 of Fig. 1A [¶0038-39]) [¶0060]), and a third input image of the subject under the ambient lighting plus the second supplemental light source (Raskar: step 302 of Fig. 3A, where n images are taken, each with any one of a plurality of light sources (flash units 101-104 of Fig. 1A with distinct locations, allowing up to five lighting conditions) [¶0038-39 & 0060]), and
compute an output image having the 2D features eliminated (output stylized image 201 of Fig. 2, with reduced texture and edges enhanced [¶0045-47], with an exemplary image demonstrated in Fig. 3B [¶0067]) by subtracting the first input image from the second input image to produce a first difference (Raskar: step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce difference images 303 [¶0061; Fig. 3A]), subtracting the first input image from the third input image to produce a second difference (Raskar: step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce difference images 303 [¶0061; Fig. 3A]), and while Raskar disclose calculating a ratio image, they do not teach dividing one difference by another.
Matsumoto, however, is analogous art pertinent to the field of endeavor of the present application and discloses a system for projecting light to obtain dimensional positions of an object using a division image. More specifically, Matsumoto teach and dividing the first difference by the second difference (Matsumoto: step S311 of Figure 3, where a division image is generated; the system can be configured to generate the division image by taking the difference of a first captured image (illuminated by a first light projection control unit 113 [¶0128; Fig. 2]) and a third captured image (under ambient lighting) and dividing it by the difference of a second captured image (illuminated by a second light projection control unit 115 [¶0128; Fig. 2]) and the third captured image [¶0133]). Additionally, Matsumoto discloses that subtracting the ambient light from the images and subsequently performing image division of the differences allows for the removal of the ambient light contribution and reflections from the surface of the object [¶0048].
Therefore, it would have been obvious before the effective filing date of the present application to implement the division image generation step outlined by Matsumoto with the texture reduction method provided by Raskar to arrive at the invention of the instant application.
Regarding claim 16, Raskar in view of Matsumoto teach The system according to Claim 15 (as described above) wherein the 2D sensor is a 2D camera (Raskar: digital camera 100 [¶0038]).
Regarding claim 17, Raskar in view of Matsumoto teach The system according to Claim 15 (as described previously) wherein the subject has a flat surface from which the 2D features are eliminated in the output image (Raskar: output stylized image 201 of Fig. 2, with reduced texture and edges enhanced [¶0045-47], with an exemplary image demonstrated Fig. 3B, depicting a flat table surface [¶0067]).
Regarding claim 18, Raskar in view of Matsumoto teach The system according to Claim 17 wherein the 2D sensor is aimed either perpendicularly or (Examiner notes that the use of the disjunctive “or” necessitates mapping to only one of the recited alternatives) at an oblique angle toward the flat surface (Raskar: the digital camera 100 of Fig. 1A can reasonably be aimed at either a perpendicular or an oblique angle toward the subject [¶0041; Fig. 1A]).
Regarding claim 19, Raskar in view of Matsumoto teach The system according to Claim 17 wherein the supplemental light sources are aimed at oblique angles toward the flat surface (Raskar: the flash units 101-104 of Fig. 1A can reasonably be aimed at oblique angles toward the subject [¶0041; Fig. 1A]).
Regarding claim 23, Raskar in view of Matsumoto teach The system according to Claim 15 wherein the computer controls the 2D sensor and the supplemental light sources (Raskar: the microprocessor 120 is configured to operate the digital camera 100 and light sources 101-104 [¶0038, 0045; Fig. 1A]) to automatically capture the first, second and third input images (Raskar: camera 100 is configured to take multiple images in rapid succession [¶0044]).
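For illustration only, the capture protocol described above may be sketched as follows (the examiner's sketch; the camera and flash objects and their methods are hypothetical and do not correspond to any interface disclosed by Raskar):

    def capture_sequence(camera, flashes):
        # First image: ambient lighting only, all supplemental sources off.
        images = [camera.capture()]
        # Remaining images: ambient lighting plus one supplemental source each.
        for flash in flashes:
            flash.on()
            images.append(camera.capture())
            flash.off()
        return images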
Claims 6-8 & 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Raskar et al (US 2004/0183812) in view of Matsumoto; Shinya (US 2019/0170506), further in view of Official Notice.
Regarding claim 6, Raskar in view of Matsumoto teach The method according to Claim 1 (as described previously) wherein subtracting the first input image from the second input image and subtracting the first input image from the third input image (Raskar: each difference image 303 produced in step 310 is generated by subtracting an ambient image 301 from the n illuminated images 302 [¶0061; Fig. 3A]), and where dividing the first difference by the second difference includes dividing the pixel intensity value on a corresponding pixel-by-pixel basis (Matsumoto: see expression 4, wherein two differences between images (each taken with respect to an ambient image) are divided by each other for each x,y pixel coordinate [¶0053]). Again, Matsumoto discloses that subtracting the ambient light from the images and subsequently performing image division of the differences allows for the removal of the ambient light contribution and reflections from the surface of the object [¶0048].
Therefore, it would have been obvious before the effective filing date of the present application to implement the division image generation step outlined by Matsumoto with the texture reduction method provided by Raskar to arrive at the invention of the instant application.
Raskar in view of Matsumoto, however, fail to explicitly disclose that these subtracting steps include subtracting a pixel intensity value on a corresponding pixel-by-pixel basis. Official Notice is taken as to the fact that image subtraction of pixel intensities on a pixel-by-pixel basis is common in the art. One of ordinary skill in the art would recognize that performing image subtraction on a pixel-by-pixel basis is a fundamental process in image processing and results in a correlation of pixels to one another. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to perform the image subtraction outlined by Raskar on a pixel-by-pixel basis.
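For clarity, the per-pixel operation mapped above may be written generically as

    R(x, y) = (I1(x, y) − Ia(x, y)) / (I2(x, y) − Ia(x, y)),

where Ia is the first input image (ambient lighting only), I1 and I2 are the second and third input images (ambient lighting plus the first and second supplemental light sources, respectively), and the subtraction and division are each performed independently at every pixel coordinate (x, y). The examiner notes that this notation paraphrases the mapping above and is not a verbatim reproduction of Matsumoto's expression 4.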
Regarding claim 7, Raskar in view of Matsumoto, further in view of Official Notice teach The method according to Claim 6 (as described above) wherein computing the output image includes computing a first intermediate image by subtracting a portion or an entirety of the first input image from a corresponding portion or (Examiner notes that the use of the disjunctive “or” necessitates mapping to only one of the recited alternatives) entirety of the second input image (Raskar: step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce the intermediate difference images 303 [¶0061; Fig. 3A]), and computing a second intermediate image by subtracting the portion or the entirety of the first input image from a corresponding portion or entirety of the third input image (Raskar: step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce difference images 303 [¶0061; Fig. 3A]), then computing the output image by dividing the first intermediate image by the second intermediate image (Raskar: in step 330, ratio images are generated from intermediate difference images 303 divided by a maximum image 304 to output ratio images 305 [¶0063; Fig. 3A]).
Regarding claim 8, Raskar in view of Matsumoto, further in view of Official Notice teach The method according to Claim 6 (as described above) wherein computing the output image includes computing the first and second differences (Raskar: step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce n difference images 303 [¶0061; Fig. 3A]) and dividing the first difference by the second difference for each pixel of the output image (Matsumoto: see expression 4, wherein two differences between images (each taken with respect to an ambient image) are divided by each other for each x,y pixel coordinate [¶0053]). Again, Matsumoto discloses that subtracting the ambient light from the images and subsequently performing image division of the differences allows for the removal of the ambient light contribution and reflections from the surface of the object [¶0048].
Therefore, it would have been obvious before the effective filing date of the present application to implement the division image generation step outlined by Matsumoto with the texture reduction method provided by Raskar to arrive at the invention of the instant application.
Regarding claim 20, Raskar in view of Matsumoto teach The system according to Claim 15 (as described previously) wherein subtracting the first input image from the second input image and subtracting the first input image from the third input image (Raskar: each difference image 303 produced in step 310 is generated by subtracting an ambient image 301 from the n illuminated images 302 [¶0061; Fig. 3A]), and where dividing the first difference by the second difference includes dividing the pixel intensity value on a corresponding pixel-by-pixel basis (Matsumoto: see expression 4, wherein two differences between images (each taken with respect to an ambient image) are divided by each other for each x,y pixel coordinate [¶0053]). Again, Matsumoto discloses that subtracting the ambient light from the images and subsequently performing image division of the differences allows for the removal of the ambient light contribution and reflections from the surface of the object [¶0048].
Therefore, it would have been obvious before the effective filing date of the present application to implement the division image generation step outlined by Matsumoto with the texture reduction method provided by Raskar to arrive at the invention of the instant application.
Raskar in view of Matsumoto, however, fail to explicitly disclose that these subtracting steps include subtracting a pixel intensity value on a corresponding pixel-by-pixel basis. Official Notice is taken as to the fact that image subtraction of pixel intensities on a pixel-by-pixel basis is common in the art. One of ordinary skill in the art would recognize that performing image subtraction on a pixel-by-pixel basis is a fundamental process in image processing and would result in a correlation of pixels to one another. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to perform the image subtraction outlined by Raskar on a pixel-by-pixel basis.
Regarding claim 21, Raskar in view of Matsumoto, further in view of Official Notice teach The system according to Claim 20 (as described above) wherein computing the output image includes computing a first intermediate image by subtracting a portion or (Examiner notes that the use of the disjunctive “or” necessitates mapping to only one of the recited alternatives) an entirety of the first input image from a corresponding portion or entirety of the second input image (Raskar: step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce the intermediate difference images 303 [¶0061; Fig. 3A]), and computing a second intermediate image by subtracting the portion or the entirety of the first input image from a corresponding portion or entirety of the third input image (Raskar: step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce difference images 303 [¶0061; Fig. 3A]), then computing the output image by dividing the first intermediate image by the second intermediate image (Raskar: in step 330, ratio images are generated from intermediate difference images 303 divided by a maximum image 304 to output ratio images 305 [¶0063; Fig. 3A]).
Regarding claim 22, Raskar in view of Matsumoto, further in view of Official Notice teach The system according to Claim 20 (as described above) wherein computing the output image includes computing the first and second differences (Raskar: step 310, where the ambient image 301 is subtracted from the n illuminated images 302 to produce n difference images 303 [¶0061; Fig. 3A]) and dividing the first difference by the second difference for each pixel of the output image (Matsumoto: see expression 4, wherein two differences between images (each taken with respect to an ambient image) are divided by each other for each x,y pixel coordinate [¶0053]). Again, Matsumoto discloses that subtracting the ambient light from the images and subsequently performing image division of the differences allows for the removal of the ambient light contribution and reflections from the surface of the object [¶0048].
Therefore, it would have been obvious before the effective filing date of the present application to implement the division image generation step outlined by Matsumoto with the texture reduction method provided by Raskar to arrive at the invention of the instant application.
Claims 9, 10 & 24 are rejected under 35 U.S.C. 103 as being unpatentable over Raskar et al (US 2004/0183812) in view of Matsumoto; Shinya (US 2019/0170506), further in view of Diankov et al (US 2020/0148489 A1).
Regarding claim 9, Raskar in view of Matsumoto teach The method according to Claim 1 (as described previously) but do not teach applying this method towards dimensioning boxes arranged on a pallet.
Diankov et al (hereinafter referred to as “Diankov”), on the other hand, is analogous art pertinent to the field of endeavor of the present application and disclose a shipping logistics system for dimensioning packages in a container. More specifically, Diankov teach wherein the subject is a plurality of boxes arranged on a pallet (Diankov: a plurality of packages 20 in the shape of boxes are arranged in a container 22 [¶0074; Fig. 3] – the examiner notes that, in the context of Diankov, one of ordinary skill in the art would recognize that packages are often arranged on pallets in bulk shipping), and further comprising using the output image in a box segmentation computation (Diankov: the composite map generating section 630 generates composite data of the state of the shipping container, which is utilized by the unloading operation determining section 640 to estimate the size of packages [¶0078-80 & 0110; Fig. 6]), where edges of the boxes are identified in the output image (Diankov: the unloading operation determining section 640 estimates the lengths of package edges [¶0110]), and sizes and shapes of individual boxes are determined from the edges (Diankov: the unloading operation determining section 640 calculates the sizes of a plurality of packages [¶0110], with Figs. 7-9 demonstrating point cloud data used for segmentation and dimensioning [¶0123-124]). Diankov further states that even when a portion of a package cannot be scanned by the sensor, the size of the package can be narrowed down based on neighboring package sizes [¶0111], which is beneficial when searching for a package of a particular size, or when palletizing or depalletizing a shipment of packages in a particular order.
Therefore, it would have been obvious before the effective filing date of the present application to take package segmentation methods disclosed by Diankov and incorporate them with the feature-elimination method of Raskar in view of Matsumoto to arrive at the invention of the instant application.
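For illustration only, an edge-based box segmentation of the kind recited may be sketched as follows (the examiner's sketch using common OpenCV routines; the thresholds and function names are illustrative and do not correspond to any routine disclosed by Diankov):

    import cv2
    import numpy as np

    def segment_boxes(output_image, min_area=500):
        # output_image: the feature-eliminated output image (e.g., the
        # ratio image sketched earlier), scaled to an 8-bit grayscale array.
        # Detect the box edges that survive texture elimination.
        edges = cv2.Canny(output_image, 50, 150)
        # Close small gaps so each box outline forms a connected contour.
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
        # Extract outer contours and report one bounding rectangle per box
        # face, from which sizes and shapes of individual boxes follow.
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]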
Regarding claim 10, Raskar in view of Matsumoto teach The method according to Claim 1 (as described previously), but do not teach applying this method towards dimensioning boxes arranged on a pallet.
Diankov, on the other hand, teach wherein the subject is a plurality of flat packages arranged on a surface (Diankov: a plurality of packages 20 in the shape of boxes are arranged in a container 22 [¶0074; Fig. 3]), and further comprising using the output image in a package finding computation (Diankov: the composite map generating section 630 generates composite data of the state of the shipping container, which is utilized by the unloading operation determining section 640 to estimate the size of packages [¶0078-80 & 0110; Fig. 6]), where edges of the packages are identified in the output image (Diankov: the unloading operation determining section 640 estimates the lengths of package edges [¶0110]), and sizes and shapes of individual packages are determined from the edges (Diankov: the unloading operation determining section 640 calculates the sizes of a plurality of packages [¶0110], with Figs. 7-9 demonstrating point cloud data used for segmentation and dimensioning [¶0123-124]). Diankov further states that even when a portion of a package cannot be scanned by the sensor, the size of the package can be narrowed down based on neighboring package sizes [¶0111], which is beneficial when searching for a package of a particular size, or when palletizing or depalletizing a shipment of packages in a particular order.
Therefore, it would have been obvious before the effective filing date of the present application to take package segmentation methods disclosed by Diankov and incorporate them with the feature-elimination method of Raskar in view of Matsumoto to arrive at the invention of the instant application.
Regarding claim 24, Raskar in view of Matsumoto teach The system according to Claim 15 (as described previously) but do not teach applying this method towards dimensioning boxes arranged on a pallet.
Diankov, on the other hand, teach wherein the subject is a plurality of boxes arranged on a pallet (Diankov: a plurality of packages 20 in the shape of boxes are arranged in a container 22 [¶0074; Fig. 3] – the examiner notes that, in the context of Diankov, one of ordinary skill in the art would recognize that packages are often arranged on pallets in bulk shipping), and the output image is used in a box segmentation computation, by the computer or by a different computer (Diankov: the composite map generating section 630 generates composite data of the state of the shipping container, which is utilized by the unloading operation determining section 640 to estimate the size of packages [¶0078-80 & 0110; Fig. 6]; the logistics management system 100 performing these operations may be realized via hardware, such as a computer [¶0033; Fig. 1]), where edges of the boxes are identified in the output image (Diankov: the unloading operation determining section 640 estimates the lengths of package edges [¶0110]), and sizes and shapes of individual boxes are determined from the edges (Diankov: the unloading operation determining section 640 calculates the sizes of a plurality of packages [¶0110], with Figs. 7-9 demonstrating point cloud data used for segmentation and dimensioning [¶0123-124]). Diankov further states that even when a portion of a package cannot be scanned by the sensor, the size of the package can be narrowed down based on neighboring package sizes [¶0111], which is beneficial when searching for a package of a particular size, or when palletizing or depalletizing a shipment of packages in a particular order.
Therefore, it would have been obvious before the effective filing date of the present application to take package segmentation methods disclosed by Diankov and incorporate them with the feature-elimination method of Raskar in view of Matsumoto to arrive at the invention of the instant application.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Raskar et al (US 2004/0183812) in view of Matsumoto; Shinya (US 2019/0170506), further in view of Aubert et al (US 2016/0180201 A1).
Regarding claim 11, Raskar in view of Matsumoto teach The method according to Claim 1 (as previously described) wherein the subject has curved surfaces (Raskar: the subject 390 of Fig. 3B (a bouquet of flowers) has a plurality of curved surfaces [¶0067]) from which the 2D features are removed in the output image (Raskar: the subject 390 of Fig. 3B having textures removed, resulting in a stylized image [¶0067]), where a plurality of supplemental light sources are provided in the workspace (Raskar: light sources 101-104 are provided in the field of view of the digital camera 100 of Fig. 1A [¶0038-39]), but do not disclose selectively removing 2D features from localized portions of the image.
Aubert et al (hereinafter referred to as “Aubert”), however, disclose an image processing method for removing shadows from a portion of an image. More specifically, Aubert teach and a plurality of input images are used to selectively remove the 2D features from localized portions of the output image (Aubert: step S4.4 of the method outlined in Fig. 4, wherein a shadow is removed from an object, and where identified shadow portions (elements 18a & 18b of Fig. 5) are removed from an image or a set of images [¶0026-29 & 0031-32]). Furthermore, Aubert discloses that the process of removing shadows from portions of an image is critical to accurate identification of objects in the field of view [¶0002-3].
Therefore, it would have been obvious before the effective filing date of the present application to apply the removal of shadows from portions of an image disclosed by Aubert to the 2D feature elimination method disclosed by Raskar in view of Matsumoto, thereby further improving object identification, to arrive at the invention of the instant application.
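For illustration only, selective removal restricted to localized portions may be sketched as follows (the examiner's generic sketch using a binary mask; the variable names are illustrative, and the masking shown is not Aubert's specific shadow-removal procedure):

    import numpy as np

    def remove_features_locally(original, feature_free, mask):
        # Within the masked (localized) portions, substitute pixels from the
        # feature-eliminated image; elsewhere, retain the original pixels.
        return np.where(mask, feature_free, original)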
Allowable Subject Matter
Claim 13 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art neither anticipates nor renders obvious the invention as presented in claim 13. The closest prior art may individually teach removing certain 2D features from an image – Finlayson et al (US 8811729 B2) disclose a method for eliminating color, Bala; Raja (US 2022/0189088 A1) discloses a method for eliminating text, and Ishizaka; Shugo (US 2014/0270573 A1) discloses a method for eliminating graphics. However, the closest prior art does not explicitly teach eliminating tape as a 2D feature, nor does it teach toward the context of the invention as a whole.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Colburn et al (US 10484617 B1) disclose a system for omitting specular reflections of objects under illuminated conditions.
Camus et al (US 6021210 A) disclose a method of image subtraction to remove ambient illumination.
Sones et al (WO 2010132162 A2) disclose a system and method for dimensioning objects using stereoscopic imaging.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael M. Sofroniou whose telephone number is (571)272-0287. The examiner can normally be reached M-F: 8:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John M. Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL M SOFRONIOU/Examiner, Art Unit 2661
/JOHN VILLECCO/Supervisory Patent Examiner, Art Unit 2661