Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 3 is objected to because of the following informality: claim 3 recites itself as the claim from which it depends. For examination purposes, claim 3 is treated as depending on claim 1. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-11 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cheong et al., US PGPUB 20140232727 (hereinafter Cheong), in view of Karakotsios et al., US PGPUB 20150062006 (hereinafter Karakotsios).
As to claim 1, Cheong discloses a method of an electronic device, the method comprising: displaying a first image (e.g., the four attributes 601, fig. 6A);
receiving a gesture drawing input to the first image (e.g., gesture 603, fig. 6A);
identifying a shape of the gesture drawing input and a position on which the gesture drawing input is received (e.g., quadrangle 609, fig. 6C);
identifying an object, positioned at the identified position, included in the first image and space information related to a space on the first image corresponding to the shape of the gesture drawing input on the first image ([0063] For example, the attribute information can include coordinate information for generating the writing data, center coordinates of the writing data, vertex coordinates of the writing data, and angle information).
Cheong does not specifically disclose determining, based on the space information and the identified object, lighting information corresponding to the identified shape and position of the gesture drawing input, the lighting information indicating a shape and a size of a virtual lighting; and
generating a second image by applying the virtual lighting to the identified object included in the first image.
However, in the same field of endeavor, Karakotsios discloses determining, based on the space information and the identified object, lighting information corresponding to the identified shape and position of the gesture drawing input, the lighting information indicating a shape and a size of a virtual lighting (mapping pattern 254 on the virtual keyboard 252, wherein the keys under the motion can be highlighted); and
generating a second image by applying the virtual lighting to the identified object included in the first image (wherein the highlighted keys under the motion read as a second image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Cheong to further include Karakotsios’s object tracking method, in order to effectively activate a desired function.
As to claim 11, Cheong discloses an electronic device comprising: a touch display (e.g., touch screen 260, fig. 2);
a memory comprising an instruction (memory 210, fig. 2); and
at least one processor comprising processing circuitry, wherein the at least one processor is configured to execute the instruction (processor unit 220, fig. 2), and
the at least one processor, individually and/or collectively, is configured to cause the electronic device to: display a first image through the display (e.g., the four attributes 601, fig. 6A);
receive a gesture drawing input to the first image (e.g., gesture 603, fig. 6A);
identify a shape of the gesture drawing input and a position on which the gesture drawing input is received (e.g., quadrangle 609, fig. 6C);
identify an object, positioned at the identified position, included in the first image and space information related to a space on the first image corresponding to the shape of the gesture drawing input on the first image ([0063] For example, the attribute information can include coordinate information for generating the writing data, center coordinates of the writing data, vertex coordinates of the writing data, and angle information);
Cheong does not specifically disclose determine, based on the space information and the identified object, lighting information corresponding to the identified shape and position of the gesture drawing input, the lighting information indicating a shape and a size of a virtual lighting; and generate a second image by applying the virtual lighting to the first image.
However, in the same field of endeavor, Karakotsios discloses determine, based on the space information and the identified object, lighting information corresponding to the identified shape and position of the gesture drawing input, the lighting information indicating a shape and a size of a virtual lighting (mapping pattern 254 on the virtual keyboard 252, wherein the keys under the motion can be highlighted); and
generate a second image by applying the virtual lighting to the first image (wherein the highlighted keys under the motion read as a second image).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Cheong to further include Karakotsios’s object tracking method, in order to effectively activate a desired function.
2. (Cancelled)
As to claim 3, the combination of Cheong and Karakotsios discloses the method of claim 1. The combination further discloses that the lighting information is applied, based on a spatial position relationship between the identified object and the virtual lighting, to the identified object, and that the spatial position relationship is identified based on image segmentation or depth information of the first image (Karakotsios, mapping pattern 254 on the virtual keyboard 252, wherein the keys under the motion can be highlighted).
As to claim 4, the combination of Cheong and Karakotsios discloses the method of claim 3. The combination further discloses displaying the lighting information on the first image through a display; and receiving an input of selection of the displayed lighting information (Karakotsios, [0017] FIG. 2(b) illustrates how the motion 212 of the user's finger in FIG. 2(a) can be mapped to corresponding locations on the virtual keyboard 252 based on image information captured by the camera 206 of the device 204 even though the finger is a distance from the display screen).
As to claim 5, the combination of Cheong and Karakotsios discloses the method of claim 4. The combination further discloses that the lighting information comprises two or more pieces of lighting selection information (Karakotsios, [0029] In at least some embodiments, a determination that the lighting is not sufficient can cause one or more types of illumination to be activated on the device).
As to claim 6, the combination of Cheong and Karakotsios discloses the method of claim 5. The combination further discloses that the input of selection comprises an input of selection of turning on or off the two or more pieces of lighting selection information, a detailed characteristic selection input, or a detailed characteristic adjustment input (Karakotsios, [0029] In at least some embodiments, a determination that the lighting is not sufficient can cause one or more types of illumination to be activated on the device).
As to claim 7, the combination of Cheong and Karakotsios discloses the method of claim 4. The combination further discloses identifying, based on the space information, two or more pieces of lighting information of the first image; displaying the two or more pieces of lighting information on the first image through the display; and receiving an input of selection of the two or more pieces of lighting information (Karakotsios, [0041] In one example, an LED or other source of illumination is activated (e.g., flashed or strobed) during a time of image capture of at least one camera or sensor).
As to claim 8, the combination of Cheong and Karakotsios discloses the method of claim 1. The combination further discloses that the space information comprises identification information of a space corresponding to the first image, style information, and physical space information (Cheong, [0063] For example, the attribute information can include coordinate information for generating the writing data, center coordinates of the writing data, vertex coordinates of the writing data, and angle information).
As to claim 9, the combination of Cheong and Karakotsios discloses the method of claim 8. The combination further discloses that the lighting information comprises type information, shape information, size information, or characteristic information of the virtual lighting, and that one or more of the type information, the shape information, the size information, or the characteristic information of the virtual lighting are determined based on one or more of the identification information of the space, the style information, or the physical space information (Karakotsios, [0017] FIG. 2(b) illustrates how the motion 212 of the user's finger in FIG. 2(a) can be mapped to corresponding locations on the virtual keyboard 252 based on image information captured by the camera 206 of the device 204 even though the finger is a distance from the display screen).
As to claim 10, the combination of Cheong and Karakotsios discloses the method of claim 9. The combination further discloses providing, through a display, a menu for selecting or adjusting one or more of the type information, the shape information, the size information, or the characteristic information of the virtual lighting (Cheong, [0202] When detecting the user's touch or gesture for selecting the menu, the electronic device can prepare to analyze the attribute information and the type information by executing the voice recognition function).
12. (Cancelled)
As to claim 13, the combination of Cheong and Karakotsios discloses the electronic device of claim 11. The combination further discloses that the at least one processor, individually and/or collectively, is configured to cause the electronic device to: apply, based on a spatial position relationship between the identified object and the virtual lighting, the virtual lighting to the identified object; and identify, based on image segmentation or depth information of the first image, the spatial position relationship (Karakotsios, mapping pattern 254 on the virtual keyboard 252, wherein the keys under the motion can be highlighted).
As to claim 14, the combination of Cheong and Karakotsios discloses the electronic device of claim 11. The combination further discloses that the at least one processor, individually and/or collectively, is configured to cause the electronic device to: display the lighting information on the first image through the display; and receive an input of selection of the displayed lighting information (Karakotsios, [0017] FIG. 2(b) illustrates how the motion 212 of the user's finger in FIG. 2(a) can be mapped to corresponding locations on the virtual keyboard 252 based on image information captured by the camera 206 of the device 204 even though the finger is a distance from the display screen).
As to claim 15, the combination of Cheong and Karakotsios discloses the electronic device of claim 14. The combination further discloses that the lighting information comprises one or more pieces of lighting selection information (Karakotsios, [0029] In at least some embodiments, a determination that the lighting is not sufficient can cause one or more types of illumination to be activated on the device).
As to claim 16, the combination of Cheong and Karakotsios discloses the electronic device of claim 15. The combination further discloses that the input of selection comprises an input of selection of turning on or off the two or more pieces of lighting selection information, a detailed characteristic selection input, or a detailed characteristic adjustment input (Karakotsios, [0029] In at least some embodiments, a determination that the lighting is not sufficient can cause one or more types of illumination to be activated on the device).
As to claim 17, the combination of Cheong and Karakotsios discloses the electronic device of claim 14. The combination further discloses that the at least one processor, individually and/or collectively, is configured to cause the electronic device to: identify, based on the space information, two or more pieces of lighting information of the first image; display the two or more pieces of lighting information on the first image through the display; and receive an input of selection of the two or more pieces of lighting information (Karakotsios, [0041] In one example, an LED or other source of illumination is activated (e.g., flashed or strobed) during a time of image capture of at least one camera or sensor).
As to claim 18, the combination of Cheong and Karakotsios discloses the electronic device of claim 11. The combination further discloses that the space information comprises identification information of a space corresponding to the first image, style information, and physical space information (Cheong, [0063] For example, the attribute information can include coordinate information for generating the writing data, center coordinates of the writing data, vertex coordinates of the writing data, and angle information).
As to claim 19, the combination of Cheong and Karakotsios discloses the electronic device of claim 18. The combination further discloses that the lighting information comprises type information, shape information, size information, or characteristic information of the virtual lighting, and that the at least one processor, individually and/or collectively, is configured to cause the electronic device to determine, based on one or more of the identification information of the space, the style information, or the physical space information, one or more of the type information, the shape information, the size information, or the characteristic information of the virtual lighting (Karakotsios, [0017] FIG. 2(b) illustrates how the motion 212 of the user's finger in FIG. 2(a) can be mapped to corresponding locations on the virtual keyboard 252 based on image information captured by the camera 206 of the device 204 even though the finger is a distance from the display screen).
As to claim 20, the combination of Cheong and Karakotsios discloses the electronic device of claim 19. The combination further discloses that the at least one processor, individually and/or collectively, is configured to cause the electronic device to provide, through the display, a menu for selecting or adjusting one or more of the type information, the shape information, the size information, or the characteristic information of the virtual lighting (Cheong, [0202] When detecting the user's touch or gesture for selecting the menu, the electronic device can prepare to analyze the attribute information and the type information by executing the voice recognition function).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wild et al., US PGPUB 20170368938, discloses an infotainment system, a vehicle, and a user interface. The user interface may comprise an input device, referred to as a “finger strip,” for detecting swipe gestures of a user guided by a structure in respect of two dimensions; a display device; and an evaluating unit. The evaluating unit may be configured to evaluate swipe gestures detected by means of the finger strip for substantially continuously variable adjustment of an output variable, and to evaluate tap inputs detected by means of the finger strip for selection and/or definition of favorites.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAHLU OKEBATO whose telephone number is (571)270-3375. The examiner can normally be reached Mon - Fri 8:00 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, WILLIAM BODDIE can be reached at 571-272-0666. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAHLU OKEBATO/
Primary Examiner, Art Unit 2625
3/14/2026