Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendments and Arguments
Amendments and arguments filed on 12/29/2025 have been fully considered but are not found to place the application in condition for allowance.
While Wilson and Cote do not specifically teach determining a direction of the touch input based on a shape of an area in a plane of the image, the prior art is found to teach such a limitation, as set forth in the rejections below. Accordingly, this limitation is not found to place the application in condition for allowance.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 13 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wilson, US 2005/0226505 A1, hereinafter “Wilson”, in view of Cote et al., US 2011/0090380 A1, hereinafter “Cote”, and further in view of Park et al., US 2012/0019562 A1, hereinafter “Park”.
Regarding claim 13, Wilson teaches a non-transitory computer-readable medium storing computer instructions that, when executed by at least one processor of a projector, the at least one processor comprising processing circuitry (¶ 28-31), cause the projector, individually and/or collectively, to perform operations comprising: projecting an image (¶ 47); outputting an infrared ray to a projection area where the image is projected (¶ 34) using an infrared emitter (fig. 2, element 66, ¶ 34); generating an image by photographing the projection area (¶ 39: “image of IR light reflected from objects”) using an infrared camera module including an infrared camera (fig. 2, element 68, ¶ 39); and correcting brightness of an image obtained through the infrared camera module using correction data (¶ 65-66, see I*(x,y) for example); wherein the correction data is generated based on first correction data; wherein the first correction data includes correction values for correcting a decrease in intensity of the infrared ray according to a distance from the infrared emitter (¶ 65), wherein the correction values included in the first correction data are determined based on an intensity of the infrared ray measured in a plurality of areas in the projection area (¶ 65, see “per pixel basis”), and wherein the correction values of the correction data are based on the correction values of the first correction data (¶ 65-68).
Wilson does not specifically teach that the correction data is further generated based on second correction data, wherein the second correction data includes correction values for correcting lens shading; wherein the correction data are based on the correction values of first and second correction data.
Cote, however, teaches that correction data is generated based on correction values for correcting lens shading (¶ 144).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson and Cote. Both references teach improving the uniformity of an imaged area. Wilson teaches that light intensity non-uniformity occurs due to variations in distance from the infrared emitters (see Wilson ¶ 65). Cote teaches that non-uniformity may also occur due to lens shading in a camera module similar to that of Wilson, and further teaches compensating for intensity drop-offs due to such lens shading. As such, one would have been motivated to combine the teachings of Wilson and Cote in order to compensate for intensity drop-offs caused by both the distance from the infrared emitters and the lens shading effect, thereby improving detection of inputs provided by a user.
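For context only, the following is a minimal sketch of the kind of per-pixel lens shading compensation discussed above: a radial gain map that offsets intensity drop-off toward the image periphery. The quadratic falloff model, the function name, and the parameter value are illustrative assumptions, not Cote's disclosed implementation.

```python
import numpy as np

def lens_shading_gain(height, width, falloff=0.35):
    # Illustrative radial gain map: modeled illumination drops off
    # quadratically with distance from the optical center, so the gain
    # (its reciprocal) grows toward the periphery to flatten the frame.
    # The quadratic model and falloff value are assumptions.
    ys, xs = np.indices((height, width), dtype=np.float64)
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx) / np.hypot(cy, cx)  # 0 at center, 1 at corner
    illumination = 1.0 - falloff * r**2
    return 1.0 / illumination

# Usage: multiply a raw IR frame by the gain map, pixel by pixel.
raw = np.random.rand(480, 640)            # stand-in for a captured IR frame
flattened = raw * lens_shading_gain(480, 640)
```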
Wilson and Cote do not teach using the correction values to identify a direction of the touch input based on a shape of an area in a plane of the image.
Park, however, teaches identifying a direction of the touch input based on a shape of an area in a plane of the image (¶ 147).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson, Cote, and Park. Wilson and Cote teach correcting touch input values. Wilson further teaches determining a shape of the input object (¶ 64), and Park teaches using such touch input values to determine the direction of a touch input based on the detected shape of the touch input. One would have been motivated to make such a combination because Park teaches that such direction information may be utilized to provide further input capabilities to a user, thus improving touch interactions with an electronic device. For example, Park teaches in fig. 28 and ¶ 189-194 that a rotation of a user’s finger may be determined in order to provide rotational inputs.
Regarding claim 15, Wilson teaches that the correction values included in the first correction data include correction values for a plurality of pixels of the image sensor; wherein the correction values for the plurality of pixels of the image sensor include correction values for pixels corresponding to the plurality of areas from among the plurality of pixels and correction values for pixels corresponding to remaining areas; wherein the correction values for pixels corresponding to the plurality of areas include correction values for the plurality of areas; and wherein the correction values for the plurality of areas are determined based on the correction values of the plurality of areas being applied to intensities of a plurality of infrared rays measured in the plurality of areas, and intensities of the plurality of infrared rays to which the correction values are applied become equal to each other (¶ 65-67; note that the values are “normalized” to provide uniformity, i.e., they become equal to each other across different areas on a per-pixel basis, which covers the different areas of the imaged surface).
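To illustrate the normalization concept relied on above, the sketch below maps each pixel's measured IR intensity into a common range using stored per-pixel minimum and maximum values, so that equal inputs yield equal normalized values across the surface. This is one common form of min/max normalization consistent with the cited I*(x,y), I_max, and I_min values; it is not asserted to be Wilson's exact formula.

```python
import numpy as np

def normalize_intensity(frame, i_min, i_max):
    # Per-pixel min/max normalization: map raw IR intensity into [0, 1]
    # so that areas near and far from the emitter become comparable.
    # i_min and i_max are per-pixel calibration arrays (assumed).
    span = np.maximum(i_max - i_min, 1e-9)  # guard against divide-by-zero
    return np.clip((frame - i_min) / span, 0.0, 1.0)
```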
Regarding claim 16, Wilson does not specifically teach that the second correction data includes correction values for a plurality of pixels of the image sensor; and wherein the correction values of the correction data are based on the correction values of the first correction data and the correction values of the second correction data being multiplied on a pixel-by-pixel basis.
Cote, however, teaches that the second correction data includes correction values for a plurality of pixels of the image sensor; and wherein the correction values of the correction data are based on the correction values of the first correction data and the correction values of the second correction data being multiplied on a pixel-by-pixel basis (¶ 144, lens shading correction is performed on a per-pixel basis).
Note that because both references teach a per-pixel correction, the combination of Wilson in view of Cote results in a multiplication of the two correction values on a per-pixel basis in order to maintain the correct values for each pixel.
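The per-pixel combination noted above amounts to an element-wise product of the two gain maps. A minimal sketch, assuming both corrections are expressed as multiplicative per-pixel gains (all array names are hypothetical):

```python
import numpy as np

# Hypothetical per-pixel gain maps (stand-ins for stored correction data).
distance_gain = np.full((480, 640), 1.2)   # first correction data
shading_gain = np.full((480, 640), 1.1)    # second correction data
raw_ir_frame = np.random.rand(480, 640)

# Because each map is a per-pixel multiplicative gain, combining them
# is an element-wise (pixel-by-pixel) multiplication.
combined_gain = distance_gain * shading_gain
corrected = raw_ir_frame * combined_gain
```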
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson and Cote. Both references teach improving the uniformity of an imaged area. Wilson teaches that light intensity non-uniformity occurs due to variations in distance from the infrared emitters (see Wilson ¶ 65). Cote teaches that non-uniformity may also occur due to lens shading in a camera module similar to that of Wilson, and further teaches compensating for intensity drop-offs due to such lens shading. As such, one would have been motivated to combine the teachings of Wilson and Cote in order to compensate for intensity drop-offs caused by both the distance from the infrared emitters and the lens shading effect, thereby improving detection of inputs provided by a user.
Claims 1, 3-7, 9-12, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wilson, in view of Cote, further in view of Keh et al., US 2013/0241820 A1, hereinafter “Keh”, and further in view of Park.
Regarding claim 1, Wilson teaches a projector (fig. 2, element 60, ¶ 33) comprising: a projection part (fig. 2, element 70) comprising a light source configured to project an image (¶ 47); an infrared emitter (fig. 2, element 66, ¶ 34) comprising circuitry configured to output an infrared ray to a projection area where the image is projected (¶ 34); an infrared camera module (fig. 2, element 68, ¶ 39) comprising an infrared camera configured to generate an image by photographing the projection area using an image sensor (¶ 39: “image of IR light reflected from objects”); a memory storing correction data including correction values for correcting brightness of the image (¶ 65-66, for example, I_max and I_min values are stored values); and at least one processor, comprising processing circuitry, individually and/or collectively configured to: correct brightness of an image obtained through the infrared camera module using the correction data (¶ 65-66, see I*(x,y) for example), and identify, based on the corrected brightness, whether a touch input is entered within the projection area (¶ 68), wherein the correction data is generated based on first correction data; wherein the first correction data includes correction values for correcting a decrease in intensity of the infrared ray according to a distance from the infrared emitter (¶ 65), wherein the correction values included in the first correction data are determined based on an intensity of the infrared ray measured in a plurality of areas in the projection area (¶ 65, see “per pixel basis”), and wherein the correction values of the correction data are based on the correction values of the first correction data (¶ 65-68).
Wilson does not specifically teach that the correction data is further generated based on second correction data, wherein the second correction data includes correction values for correcting lens shading; wherein the correction data are based on the correction values of first and second correction data.
Cote, however, teaches that correction data is generated based on correction values for correcting lens shading (¶ 144).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson and Cote. Both references teach improving the uniformity of an imaged area. Wilson teaches that light intensity non-uniformity occurs due to variations in distance from the infrared emitters (see Wilson ¶ 65). Cote teaches that non-uniformity may also occur due to lens shading in a camera module similar to that of Wilson, and further teaches compensating for intensity drop-offs due to such lens shading. As such, one would have been motivated to combine the teachings of Wilson and Cote in order to compensate for intensity drop-offs caused by both the distance from the infrared emitters and the lens shading effect, thereby improving detection of inputs provided by a user.
Wilson and Cote do not specifically teach that the touch input is entered between the projection area and the projected infrared ray.
Keh, however, clearly teaches that the touch input is entered between the projection area and the projected infrared ray (fig. 1, see “projected image data” area, and fig. 2, wherein the touch input is entered between the projection area and the projected infrared ray emitted from element 260; see ¶ 29-30).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wilson and Cote, as applied above, further in view of Keh. Wilson and Keh teach projection systems with touch input detection capabilities, and while Wilson teaches a bottom-projected system, Keh teaches an alternative top-projected system. As such, one would have been motivated to make such a combination in order to utilize the top-projected system of Keh while expecting the same result of providing a projection system with touch capabilities.
Wilson, Cote and Keh do not teach using the correction values to identify a direction of the touch input based on a shape of an area in a plane of the image.
Park, however, teaches identifying a direction of the touch input based on a shape of an area in a plane of the image (¶ 147).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson, Cote, Keh, and Park. Wilson and Cote teach correcting touch input values. Wilson further teaches determining a shape of the input object (¶ 64), and Park teaches using such touch input values to determine the direction of a touch input based on the detected shape of the touch input. One would have been motivated to make such a combination because Park teaches that such direction information may be utilized to provide further input capabilities to a user, thus improving touch interactions with an electronic device. For example, Park teaches in fig. 28 and ¶ 189-194 that a rotation of a user’s finger may be determined in order to provide rotational inputs.
Regarding claim 7, Wilson teaches a method of identifying a touch input of a projector, the method comprising: projecting an image (fig. 2, ¶ 47); outputting an infrared ray to a projection area where the image is projected using an infrared emitter (fig. 2, element 66, ¶ 34); generating an image by photographing the projection area using an infrared camera module including an infrared camera (fig. 2, element 68, ¶ 39); correcting brightness of the image obtained through the infrared camera module using correction data (¶ 65-66, for example, I_max and I_min values are used); and identifying, based on the corrected brightness, whether a touch input for the projected image is entered within the projection area (¶ 68), wherein the correction data is generated based on first correction data; wherein the first correction data includes correction values for correcting a decrease in intensity of the infrared ray according to a distance from the infrared emitter (¶ 65), wherein the correction values included in the first correction data are determined based on an intensity of the infrared ray measured in a plurality of areas in the projection area (¶ 65, see “per pixel basis”), and wherein the correction values of the correction data are based on the correction values of the first correction data (¶ 65-68).
Wilson does not specifically teach that the correction data is further generated based on second correction data, wherein the second correction data includes correction values for correcting lens shading; wherein the correction data are based on the correction values of first and second correction data.
Cote, however, teaches that correction data is generated based on correction values for correcting lens shading (¶ 144).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson and Cote. Both references teach improving the uniformity of an imaged area. Wilson teaches that light intensity non-uniformity occurs due to variations in distance from the infrared emitters (see Wilson ¶ 65). Cote teaches that non-uniformity may also occur due to lens shading in a camera module similar to that of Wilson, and further teaches compensating for intensity drop-offs due to such lens shading. As such, one would have been motivated to combine the teachings of Wilson and Cote in order to compensate for intensity drop-offs caused by both the distance from the infrared emitters and the lens shading effect, thereby improving detection of inputs provided by a user.
Wilson and Cote do not specifically teach that the touch input is entered between the projection area and the projected infrared ray.
Keh, however, clearly teaches that the touch input is entered between the projection area and the projected infrared ray (fig. 1, see “projected image data” area, and fig. 2, wherein the touch input is entered between the projection area and the projected infrared ray emitted from element 260; see ¶ 29-30).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wilson and Cote, as applied above, further in view of Keh. Wilson and Keh teach projection systems with touch input detection capabilities, and while Wilson teaches a bottom-projected system, Keh teaches an alternative top-projected system. As such, one would have been motivated to make such a combination in order to utilize the top-projected system of Keh while expecting the same result of providing a projection system with touch capabilities.
Wilson, Cote and Keh do not teach using the correction values to identify a direction of the touch input based on a shape of an area in a plane of the image.
Park, however, teaches identifying a direction of the touch input based on a shape of an area in a plane of the image (¶ 147).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson, Cote, Keh, and Park. Wilson and Cote teach correcting touch input values. Wilson further teaches determining a shape of the input object (¶ 64), and Park teaches using such touch input values to determine the direction of a touch input based on the detected shape of the touch input. One would have been motivated to make such a combination because Park teaches that such direction information may be utilized to provide further input capabilities to a user, thus improving touch interactions with an electronic device. For example, Park teaches in fig. 28 and ¶ 189-194 that a rotation of a user’s finger may be determined in order to provide rotational inputs.
Regarding claims 3 and 9, Wilson teaches that the correction values included in the first correction data include correction values for a plurality of pixels of the image sensor; wherein the correction values for the plurality of pixels of the image sensor include correction values for pixels corresponding to the plurality of areas from among the plurality of pixels and correction values for pixels corresponding to remaining areas; wherein the correction values for pixels corresponding to the plurality of areas include correction values for the plurality of areas; and wherein the correction values for the plurality of areas are determined based on the correction values of the plurality of areas being applied to intensities of a plurality of infrared rays measured in the plurality of areas, and intensities of the plurality of infrared rays to which the correction values are applied become equal to each other (¶ 65-67; note that the values are “normalized” to provide uniformity, i.e., they become equal to each other across different areas on a per-pixel basis, which covers the different areas of the imaged surface).
Regarding claims 4 and 10, Wilson does not specifically teach that the second correction data includes correction values for a plurality of pixels of the image sensor; and wherein the correction values of the correction data are based on the correction values of the first correction data and the correction values of the second correction data being multiplied on a pixel-by-pixel basis.
Cote, however, teaches that the second correction data includes correction values for a plurality of pixels of the image sensor; and wherein the correction values of the correction data are based on the correction values of the first correction data and the correction values of the second correction data being multiplied on a pixel-by-pixel basis (¶ 144, lens shading correction is performed on a per-pixel basis).
Note that because both references teach a per-pixel correction, the combination of Wilson in view of Cote results in a multiplication of the two correction values on a per-pixel basis in order to maintain the correct values for each pixel.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson and Cote. Both references teach improving the uniformity of an imaged area. Wilson teaches that light intensity non-uniformity occurs due to variations in distance from the infrared emitters (see Wilson ¶ 65). Cote teaches that non-uniformity may also occur due to lens shading in a camera module similar to that of Wilson, and further teaches compensating for intensity drop-offs due to such lens shading. As such, one would have been motivated to combine the teachings of Wilson and Cote in order to compensate for intensity drop-offs caused by both the distance from the infrared emitters and the lens shading effect, thereby improving detection of inputs provided by a user.
Regarding claims 5, 11 and 17, Wilson teaches that the at least one processor is configured to: based on the infrared ray output from the infrared emitter being reflected by an object present in the projection area and received by the infrared camera module, obtain pixel values of the plurality of pixels of the image sensor; and correct brightness of the image by applying the correction values included in the correction data to the pixel values (fig. 2, also see ¶ 65-67).
Wilson and Cote do not specifically teach that the pixel values include a Y component of YUV data.
Keh, however, teaches that the pixel values include a Y component of YUV data (see ¶ 44).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson and Cote, as applied above, further in view of Keh. Wilson teaches normalizing the infrared light “intensity” in order to provide a uniform detection area for touch or hover detection, and Keh further teaches the use of the Y component of YUV image data in order to perform such detection. Therefore, one would have been motivated to make such a combination in order to utilize the Y component, which provides the intensity of light, to perform the intensity normalization taught by Wilson, thereby increasing the efficiency of that normalization.
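As a point of reference, the Y (luma) component can be computed from RGB data with the standard BT.601 weights shown below. The weights are a common convention assumed here for illustration, since Keh is cited only for the use of the Y component of YUV data.

```python
import numpy as np

def luma_bt601(rgb):
    # Y = 0.299 R + 0.587 G + 0.114 B (BT.601 convention, assumed).
    # rgb: array of shape (H, W, 3); returns an (H, W) intensity map
    # usable directly for the brightness processing discussed above.
    return rgb @ np.array([0.299, 0.587, 0.114])
```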
Regarding claims 6, 12 and 18, Wilson teaches identifying whether there is an area in the image in which brightness is equal to or greater than a specified value based on the corrected brightness; and based on identifying that there is an area in which brightness is equal to or greater than the specified value, identifying that the touch input is entered at a location of the projected image corresponding to the identified area (figs. 4-5; ¶ 50, 68-69 and 73).
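For illustration of the thresholding scheme just described, a minimal sketch follows; the threshold value and function name are assumptions, not language from the cited references.

```python
import numpy as np

def detect_touch(corrected, threshold=0.8):
    # Flag a touch where corrected brightness is equal to or greater
    # than the specified value, and report the centroid of that area
    # as the touch location in image coordinates; None if no touch.
    mask = corrected >= threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())
```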
Regarding claims 19 and 20, Wilson teaches that the correction values include correction values for a plurality of pixels of the image sensor, for which the pixel values include brightness data.
Wilson and Cote do not specifically teach that the pixel values include a Y component of YUV data.
Keh, however, teaches that the pixel values include a Y component of YUV data (see ¶ 44).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson and Cote, as applied above, further in view of Keh. Wilson teaches normalizing the infrared light “intensity” in order to provide a uniform detection area for touch or hover detection, and Keh further teaches the use of the Y component of YUV image data in order to perform such detection. Therefore, one would have been motivated to make such a combination in order to utilize the Y component, which provides the intensity of light, to perform the intensity normalization taught by Wilson, thereby increasing the efficiency of that normalization.
Wilson, Cote and Keh do not teach using the correction values to identify the direction of the touch input based on a shape of the area in the plane of the image where a magnitude of the corrected Y component is equal to or greater than the predetermined value.
Park, however, teaches identifying a direction of the touch input based on a shape of an area in a plane of the image (¶ 147).
Note that Wilson teaches that a brightness or intensity threshold is used to determine the shape of the input object (¶ 64), and Keh teaches the use of YUV components, wherein Y represents the brightness or intensity of incident light, to make the same determination. Park further teaches using the detected image of the touch input to determine a shape of the touch input (similar to Wilson) and further teaches using such a shape to determine the direction of the touch input. Accordingly, the combination of Wilson, Cote, Keh, and Park teaches every limitation of the claim.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wilson, Cote, Keh, and Park. Wilson and Cote teach correcting touch input values. Wilson further teaches determining a shape of the input object (¶ 64), and Park teaches using such touch input values to determine the direction of a touch input based on the detected shape of the touch input. One would have been motivated to make such a combination because Park teaches that such direction information may be utilized to provide further input capabilities to a user, thus improving touch interactions with an electronic device. For example, Park teaches in fig. 28 and ¶ 189-194 that a rotation of a user’s finger may be determined in order to provide rotational inputs.
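As a purely illustrative aside, one standard way to derive a direction from the shape of a thresholded touch area is to compute the orientation of the blob's principal axis from its second-order central moments, as sketched below. Park's disclosed technique may differ; this sketch only shows that mapping a detected shape to a direction is a routine image-processing step.

```python
import numpy as np

def touch_direction(mask):
    # Principal-axis orientation of a binary touch blob, in radians,
    # from second-order central moments (a standard technique; not
    # asserted to be Park's method). Assumes mask has at least one
    # nonzero pixel, e.g., from the thresholding sketched earlier.
    ys, xs = np.nonzero(mask)
    y0, x0 = ys.mean(), xs.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```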
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEPEHR AZARI, whose telephone number is (571) 270-7903. The examiner can normally be reached weekdays from 11 AM to 7 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad, can be reached at (571) 272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEPEHR AZARI/ Primary Examiner, Art Unit 2621