DETAILED ACTION
Response to Arguments
Applicant’s arguments, see application, filed 01/26/2026, with respect to the 112 rejection have been fully considered and are persuasive. The 112 rejection has been withdrawn.
Applicant’s arguments with respect to claims 11-20, 44-49, 58-60 and 62 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/26/2026 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 01/26/2026. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Examiner’s Note
With regard to the following claims, “contemporaneously” is defined by the specification (published para. 0042) to mean “the image data 12 and the image data 14 are acquired within 1,000 milliseconds (ms), within 100 ms, within 50 ms, within 25 ms, within 10 ms, within 5 ms, within 1 ms, or within 0.1 ms, of each other,” and defined by the specification (published para. 0053) to mean “the natural and/or artificial light 26 and the excitation light 28 each illuminate the scene 18 within 1,000 milliseconds (ms), within 100 ms, within 50 ms, within 25 ms, within 10 ms, within 5 ms, within 1 ms, or within 0.1 ms, of each other.”
With regard to the following claims, “substantially” is defined by the specification (published para. 0060): as utilized therein with regard to line of sight, the term means that the lines of sight for each of the first and second cameras 34A, 34B are within 10°, within 5°, within 4°, within 3°, within 2°, within 1°, or within 0.1°, of each other. The term “substantially” as utilized therein with regard to zoom factor means that the zoom factors for each of the first and second cameras 34A, 34B are within 10%, within 5%, within 4%, within 3%, within 2%, within 1%, or within 0.1%, of each other.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11-15, 17-19, and 44-49 are rejected under 35 U.S.C. 103 as being unpatentable over Perlman (US 20100231692) in view of Li et al. (hereinafter referred to as Li) (US Patent No. 11,860,098) and in further view of Cardin et al. (hereinafter referred to as Cardin) (US 20170287140).
Regarding claim 11, Perlman discloses a method of acquiring image data of an object, comprising:
contemporaneously illuminating the object with (a) natural and/or artificial light, and (b) excitation light; and capturing image data using an image acquisition setup that generates color data representing a visible portion of the light spectrum of the object and that generates gray scale data representing an invisible portion of the light spectrum of the object; [See Perlman [0089] Illuminated by visible light for a Visible Light Lit image and illuminated by UVA light for a grayscale Image….Also, in 0089, (whether alone or combined with visible light). Also, see 0102, transparent UV makeup which emits in the ultraviolet spectrum. Also, see Figs. 30a-b and 31, where Fig. 31 shows an ideal scenario in which the visible light panel and UV light panels switch output simultaneously. In practice, however, there will be a timing interval. For example, Fig. 14 shows a timing interval for switching the light panel on/off, and para. 0131 states this would be on the order of microseconds.]
wherein the object comprises a fluorescent dye that emits fluorescence at a wavelength in the invisible portion of the light spectrum upon illumination with the excitation light; and [See Perlman [0116] rather than using phosphorescent paint or dye, as described above, transparent UV- or transparent IR-emissive paint, ink or dye is used on clothing, props or other objects in the scene. Phosphor with the same properties as those previously described with makeup is used, and the same lighting, camera, filtering and other capture and processing techniques are used. Also, see 0102, transparent UV makeup which emits in the ultraviolet spectrum.]
wherein the excitation light and the fluorescence are in the invisible portion of the light spectrum. [See Perlman [0089] Illuminated by UVA light. Also, see 0116, transparent UV paint, ink or dye.]
wherein the image acquisition setup is configured to contemporaneously acquire the color data and the gray scale data [See Perlman [Fig. 2A] Color cameras and Grayscale cameras. Also, see [Fig. 14] There is a timing interval (1442) between image captures by the Visible cameras and the Grayscale cameras. Also, see 0131, on the order of microseconds. This would also apply to Figs. 30a-b.]
Perlman does not explicitly disclose
and the gray scale data represent the same scene at the same time,
generating in real time an alpha channel from the grayscale data and outputting a recorded alpha channel.
However, Li does disclose
wherein the image acquisition setup is configured to contemporaneously acquire the color data and the gray scale data and to produce video streams having the same time code for contemporaneously acquired frames such that the color data and the gray scale data represent the same scene at the same time, but in different color space representations; and [See Li [Col. 8 lines 23-37] Delay between two cameras on the order of milliseconds. A correspondence relationship is established based on the capturing timestamps. Also, see Col. 7 lines 35-50.]
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Perlman to add the teachings of Li, in order to assign the same time code to images acquired within a short interval (which applicant further states in the Remarks dated 08/27/2025, pg. 9, is a standard in synchronized video acquisition for practical imaging systems).
Perlman (modified by Li) does not explicitly disclose
generating in real time an alpha channel from the grayscale data and outputting a recorded alpha channel.
However, Cardin does disclose
generating in real time an alpha channel from the grayscale data and outputting a recorded alpha channel. [See Cardin [0013] Generates a live stream of images containing an alpha channel. Also, see 0075 and Fig. 3, uses IL (invisible light) image data to compute the alpha channel in real time. Also, see 0014, the invisible light is in UV spectrum.]
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Perlman (modified by Li) to add the teachings of Cardin, in order to achieve high quality foreground segmentation using an active background [See Cardin [0001]].
Regarding claim 12, Perlman (modified by Li and Cardin) discloses the method of claim 11. Furthermore, Perlman discloses
wherein the visible portion has wavelengths in the range of 400-700 nanometers (nm). [See Perlman [0089] Illuminated by visible light for a Visible Light Lit image. The claimed wavelength range is inherent to visible light.]
Regarding claim 13, Perlman (modified by Li and Cardin) discloses the method of claim 11. Furthermore, Perlman discloses
wherein the invisible portion has wavelengths of less than 400 nm. [See Perlman [0104] Describes using a transparent IR makeup with IR passing filters and only capturing the emitted IR light from the transparent IR makeup. Also, see Perlman [0116]: rather than using phosphorescent paint or dye, as described above, transparent UV- or transparent IR-emissive paint, ink or dye is used on clothing, props or other objects in the scene. Phosphor with the same properties as those previously described with makeup is used, and the same lighting, camera, filtering and other capture and processing techniques are used. Therefore, this shows using a transparent UV makeup with UV passing filters and only capturing the emitted UV light from the transparent UV makeup. It is inherent that ultraviolet light comprises wavelengths of less than 400 nm, as claimed.]
Regarding claim 14, Perlman (modified by Li and Cardin) discloses the method of claim 11. Furthermore, Perlman discloses
wherein the fluorescent dye comprises fluorophores, fluorescent energy transfer dyes, fluorescent pigments, fluorescent polymers, fluorescent proteins, or combinations thereof. [See Perlman [0128] Phosphors used as pigments.]
Regarding claim 15, Perlman (modified by Li and Cardin) discloses the method of claim 11. Furthermore, Perlman discloses
wherein the wavelength range of the excitation light is different than the wavelength range of the fluorescence light emitted by the fluorescent dye. [See Perlman [Fig. 30a-b]]
Regarding claim 17, Perlman (modified by Li and Cardin) discloses the method of claim 11. Furthermore, Perlman discloses
wherein the image acquisition setup comprises at least one camera configured to acquire the image data of the object representing the visible portion and/or the image data of the object representing the invisible portion. [See Perlman [Fig. 2A] Color cameras and Grayscale cameras.]
Regarding claim 18, Perlman (modified by Li and Cardin) discloses the method of claim 17. Furthermore, Perlman discloses
wherein the at least one camera comprises one or more image sensors configured to generate the color data, the gray scale data, or a combination thereof. [See Perlman [Fig. 2A] Color cameras and Grayscale cameras.]
Regarding claim 19, Perlman (modified by Li and Cardin) discloses the method of claim 18. Furthermore, Perlman discloses
wherein the one or more image sensors comprise a red/green/blue (RGB) sensor and an ultraviolet (UV) sensor or an infrared (IR) sensor. [See Perlman [Fig. 2A] Color cameras and Grayscale cameras.]
Regarding claim 44, see the examiner’s rejection of claim 11, which is analogous and applicable to the rejection of claim 44. Furthermore, Perlman discloses
a method of acquiring image data of an object in front of a background, comprising: [See Perlman [0080] Separate an object from its background.]
providing an image acquisition setup configured to contemporaneously acquire image data of the object representing a visible portion of the light spectrum and image data of the background representing an invisible portion of the light spectrum; [See Perlman [Fig. 14] There is a timing interval (1442) between image captures by the Visible cameras and the Grayscale cameras. Also, see 0131, on the order of microseconds. This would also apply to Figs. 30a-b.]
Regarding claim 45, see the examiner’s rejection of claims 12 and 13, which is analogous and applicable to the rejection of claim 45.
Regarding claim 46, Perlman (modified by Li and Cardin) discloses the method of claim 44. Furthermore, Perlman discloses
wherein the image acquisition setup comprises first and second sensors, wherein the first sensor acquires image data of the object representing a visible portion of the light spectrum, and wherein the second sensor acquires image data of the background representing an invisible portion of the light spectrum. [See Perlman [0089] Illuminated by visible light for a Visible Light Lit image and illuminated by UVA light for a grayscale Image….Also, see Figs. 30a-b and 31, for the different cameras.]
Regarding claim 47, Perlman (modified by Li and Cardin) discloses the method of claim 44. Furthermore, Perlman discloses
wherein the background comprises a flat surface that is illuminated using a light source that is remotely positioned relative to the flat surface. [See Perlman [0080] the meteorologist can be shown from a straight-on shot, or from a side angle shot, but still composited in front of the weather map (i.e. the weather map is presented on a screen that is not a green screen or blue screen). Also, see 0101, blue screens may be utilized in the background. Also, see Figs. 30a-b, light panels.]
Regarding claim 48, Perlman (modified by Li and Cardin) discloses the method of claim 44. Furthermore, Perlman discloses
wherein the background comprises a flat surface that is illuminated using a light source that is coupled to the flat surface. [See Perlman [0080] the meteorologist can be shown from a straight-on shot, or from a side angle shot, but still composited in front of the weather map (i.e. the weather map is presented on a screen that is not a green screen or blue screen).]
Regarding claim 49, Perlman (modified by Li and Cardin) discloses the method of claim 44. Furthermore, Perlman does not explicitly disclose
wherein the background comprises a video screen, and wherein the video screen comprises a plurality of UV LEDs that illuminate the background.
However, Cardin does disclose
wherein the background comprises a video screen, and wherein the video screen comprises a plurality of UV LEDs that illuminate the background. [See Cardin [0069] Active background illuminates in the UV spectrum. Also, see Fig. 1, Display/Screen.]
Applying the same motivation as applied in claim 11.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Perlman (US 20100231692) in view of Li (US Patent No. 11,860,098) in view of Cardin (US 20170287140) and in further view of Liu et al. (hereinafter referred to as Liu) (US 20220268701).
Regarding claim 16, Perlman (modified by Li and Cardin) discloses the method of claim 15. Furthermore, Perlman discloses
wherein the fluorescent dye is excited by the excitation light at a wavelength of 360 nm and [See Perlman [0129] Many phosphors that phosphoresce or fluoresce in visible light spectra are charged more efficiently by ultraviolet light than by visible light….with its peak at around 360nm.]
Perlman does not explicitly disclose
emits the fluorescence light at a wavelength of 381 nm.
However, Liu does disclose
emits the fluorescence light at a wavelength of 381 nm. [See Liu [0075] The fluorescent dye is excitable at 350 to 700 nm (i.e., a range that includes 381 nm).]
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Perlman (modified by Li and Cardin) to add the teachings of Liu, in order to capture the excitation wavelength based upon the fluorescent dye being utilized.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Perlman (US 20100231692) in view of Li (US Patent No. 11,860,098) in view of Cardin (US 20170287140) and in further view of Adachi et al. (hereinafter referred to as Adachi) (US 20220061683).
Regarding claim 20, Perlman (modified by Li and Cardin) discloses the method of claim 11. Furthermore, Perlman discloses
wherein the image acquisition setup further comprises an auxiliary camera configured to track a portion of the object, and [See Perlman [0096] Cameras focused for visible light…The images of the surfaces that do not have makeup on them (e.g. eyes and teeth)….tracking the eye position or teeth position. Also, see Fig. 1, Figs. 30a-b, a second color camera used for tracking.]
Perlman does not explicitly disclose
wherein the excitation light does not illuminate the portion of the subject.
However, Adachi does disclose
wherein the excitation light does not illuminate the portion of the subject. [See Adachi [0030-0031] Face tracking without utilizing light source.]
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Perlman (modified by Li and Cardin) to add the teachings of Adachi, in order to not apply illumination to certain parts of the object such that the image quality is improved.
Claims 58-60 and 62 are rejected under 35 U.S.C. 103 as being unpatentable over Perlman (US 20100231692) in view of Li (US Patent No. 11,860,098) in view of Ramirez et al. (hereinafter referred to as Ramirez) (US 20190327394) and in further view of Cardin (US 20170287140).
Regarding claim 58, Perlman discloses
an image acquisition system to capture image data of an object in a scene, wherein the object is in front of a background, comprising: [See Perlman [0080] Separate an object from its background.]
a first camera having a first image sensor that is configured to generate color data representing a visible portion of the light spectrum of the object; [See Perlman [Figs. 30a-b] Color cameras.]
a second camera having a second image sensor configured to generate gray scale data representing an invisible portion of the light spectrum of the background; [See Perlman [Figs. 30a-b] Grayscale cameras.]
a filter coupled to the second camera that permits travel of light in the invisible portion of the light spectrum to the second image sensor and that reduces or blocks travel of light in the visible portion of the light spectrum to the second image sensor; [See Perlman [0102] Cameras sensitive to UV light are used with filters that block visible and IR light.]
a light source configured to continuously illuminate the background with the light in the invisible portion of the light spectrum. [See Perlman [Figs. 30a-b] UV light sources.]
wherein the image acquisition setup is configured to contemporaneously acquire the color data and the gray scale data [See Perlman [Fig. 2A] Color cameras and Grayscale cameras. Also, see [Fig. 14] There is a timing interval (1442) between image captures by the Visible cameras and the Grayscale cameras. Also, see 0131, on the order of microseconds. This would also apply to Figs. 30a-b.]
Perlman does not explicitly disclose
wherein first and second cameras are coupled to a carrier and configured to capture the object in the scene along substantially the same line of sight and zoom factor; and
a computing device configured to generate in real time an alpha channel from the grayscale data and to output a recorded alpha channel.
However, Li does disclose
wherein the image acquisition setup is configured to contemporaneously acquire the color data and the gray scale data and to produce video streams having the same time code for contemporaneously acquired frames such that the color data and the gray scale data represent the same scene at the same time, but in different color space representations. [See Li [Col. 8 lines 23-37] Delay between two cameras on the order of milliseconds. A correspondence relationship is established based on the capturing timestamps. Also, see Col. 7 lines 35-50.]
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Perlman to add the teachings of Li, in order to assign the same time code to images acquired within a short interval (which applicant further states in the Remarks dated 08/27/2025, pg. 9, is a standard in synchronized video acquisition for practical imaging systems).
Perlman (modified by Li) does not explicitly disclose
wherein first and second cameras are coupled to a carrier and configured to capture the object in the scene along substantially the same line of sight and zoom factor; and
a computing device configured to generate in real time an alpha channel from the grayscale data and to output a recorded alpha channel.
However, Ramirez does disclose
wherein first and second cameras are coupled to a carrier and configured to capture the object in the scene along substantially the same line of sight and zoom factor; and [See Ramirez [Fig. 8 and 0021] Stereoscopic camera housing. Also, see Fig. 23, Identical Line of sight. Also, see 0120, ensure left/right magnification is substantially the same.]
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Perlman (modified by Li) to add the teachings of Ramirez, in order to perform a simple substitution of the camera arrangement (i.e. contained within a camera housing/frame/rig).
Perlman (modified by Li and Ramirez) does not explicitly disclose
a computing device configured to generate in real time an alpha channel from the grayscale data and to output a recorded alpha channel.
However, Cardin does disclose
a computing device configured to generate in real time an alpha channel from the grayscale data and to output a recorded alpha channel. [See Cardin [0013] Generates a live stream of images containing an alpha channel. Also, see 0075 and Fig. 3, uses IL (invisible light) image data to compute the alpha channel in real time. Also, see 0014, the invisible light is in UV spectrum.]
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Perlman (modified by Li and Ramirez) to add the teachings of Cardin, in order to achieve high quality foreground segmentation using an active background [See Cardin [0001]].
Regarding claim 59, Perlman (modified by Li, Ramirez and Cardin) discloses the system of claim 58. Furthermore, Perlman does not explicitly disclose
wherein the carrier comprises a stereoscopic camera carrier.
However, Ramirez does disclose
wherein the carrier comprises a stereoscopic camera carrier. [See Ramirez [Fig. 8 and 0021] Stereoscopic camera housing.]
Applying the same motivation as applied in claim 58.
Regarding claim 60, Perlman (modified by Li, Ramirez and Cardin) discloses the system of claim 58. Furthermore, Perlman does not explicitly disclose
wherein the carrier is configured to coordinate simultaneous lens focusing and/or zoom for the first and second cameras.
However, Ramirez does disclose
wherein the carrier is configured to coordinate simultaneous lens focusing and/or zoom for the first and second cameras. [See Ramirez [0120] left and right zoom lenses may be fixed to a common carrier to ensure left and right magnification is substantially the same.]
Applying the same motivation as applied in claim 58.
Regarding claim 62, Perlman (modified by Li, Ramirez and Cardin) discloses the system of claim 58. Furthermore, Perlman does not explicitly disclose
wherein the first and second cameras are configured to operate synchronously to produce video streams having the same time code for contemporaneously acquired frames.
However, Li does disclose
wherein the first and second cameras are configured to operate synchronously to produce video streams having the same time code for contemporaneously acquired frames. [See Li [Col. 8 lines 23-37] Delay between two cameras on the order of milliseconds. A correspondence relationship is established based on the capturing timestamps. Also, see Col. 7 lines 35-50.]
Applying the same motivation as applied in claim 58.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T BOYLAN whose telephone number is (571)272-8242. The examiner can normally be reached Monday-Friday 7am-3pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAMIE ATALA can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES T BOYLAN/Examiner, Art Unit 2486