DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/02/2024 has been considered by the examiner.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because it includes the phrase “Embodiments relate to.” A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 and 13-18 of U.S. Patent No. 11,922,711 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 21-35 of instant application 18/594,535 are anticipated by claims 1-7 and 13-18 of U.S. Patent No. 11,922,711 B2, respectively. Although claims 36-40 of instant application 18/594,535 are non-transitory computer readable medium claims, their subject matter is disclosed by claims 3-7 of U.S. Patent No. 11,922,711 B2, respectively.
The following chart compares the claims of instant application 18/594,535 with those of U.S. Patent No. 11,922,711 B2; each instant claim (marked “(New)”) is followed by the patent claim to which it is compared.
21. (New) A system comprising: one or more imaging devices; and a processor configured to: receive region image data of a region captured by the one or more imaging devices over a period of time; identify a plurality of objects in the region based on the region image data, wherein at least one object of the plurality of objects moves relative to the one or more imaging devices over the period of time; generate a ranking of the plurality of objects; receive, at a first time determined at least in part based on the ranking, a first set of image data of the region corresponding to the at least one object captured by the one or more imaging devices; determine a first location associated with the at least one object using the first set of image data; receive, at a second time determined at least in part based on the ranking, a second set of image data of the region corresponding to the at least one object captured by the one or more imaging devices; determine a second location associated with the at least one object using the second set of image data; and generate a model of the region based on the first location and the second location associated with the at least one object.
22. (New) The system of claim 21, wherein the ranking of the at least one object is based on at least one of (1) relevance of the at least one object to an application or activity, (2) user input indicating interest levels of the at least one object, or (3) amounts of user interaction with the at least one object.
1. A system comprising: one or more imaging devices; and a processor configured to: receive search region image data of a search region captured by the one or more imaging devices over a period of time; identify objects in the search region based on the search region image data, wherein the objects move relative to the imaging devices over the period of time; generate a ranking of the objects based on at least one of (1) relevance of the objects to an application or activity, (2) user input indicating interest levels of the objects, or (3) amounts of user interaction with the objects; determine a subset of the objects based on the ranking; determine tracking regions corresponding to the subset of the objects, each of the tracking regions being smaller than the search region; and receive, at a first time, a first set of tracking region image data of the tracking regions corresponding to the subset of the objects captured by the one or more imaging devices; determine a first set of locations associated with the subset of the objects using the set of tracking region image data; receive, at a second time, a second set of tracking region image data of the tracking regions corresponding to the subset of objects captured by the one or more imaging devices; determine a second set of locations associated with the subset of the objects using the second set of tracking region image data; and generate a model of the search region based on the first set of locations and the second set of locations associated with the subset of the objects
23. (New) The system of claim 22, wherein the processor is further configured to determine the amounts of user interaction with the at least one object based on user eye tracking.
2. The system of claim 1, wherein the processor is configured to determine the amounts of user interaction with the objects based on user eye tracking.
24. (New) The system of claim 22, wherein the processor is further configured to determine the amounts of user interaction with the at least one object based on user hand tracking.
3. The system of claim 1, wherein the processor is configured to determine the amounts of user interaction with the objects based on user hand tracking.
25. (New) The system of claim 21, wherein the one or more imaging devices include multiple imaging devices, each imaging device configured to capture the region image data for the at least one object of a subset of the at least one object.
4. The system of claim 1, wherein the one or more imaging devices include multiple imaging devices, each imaging device configured to capture tracking region image data for one object of the subset of objects.
26. (New) The system of claim 21, wherein the processor is further configured to combine objects and models of multiple regions into an aggregated model of an environment.
5. The system of claim 1, wherein the processor is further configured to combine objects and models of multiple search regions into an aggregated model of an environment.
27. (New) The system of claim 21, wherein the one or more imaging devices include: a first imaging device configured to generate search region image data of the region; and a second imaging device configured to generate tracking region image data of the region.
7. The system of claim 6, wherein the first imaging device is a depth camera and the second imaging device is a time of flight sensor.
28. (New) A method, comprising: receiving region image data of a region captured by the one or more imaging devices over a period of time; identifying a plurality of objects in the region based on the region image data, wherein at least one object of the plurality of objects moves relative to the one or more imaging devices over the period of time; generating a ranking of the plurality of objects; receiving, at a first time determined at least in part based on the ranking, a first set of image data of the region corresponding to the at least one object captured by the one or more imaging devices; determining a first location associated with the at least one object using the first set of image data; receiving, at a second time determined at least in part based on the ranking, a second set of image data of the region corresponding to the at least one object captured by the one or more imaging devices; determining a second location associated with the at least one object using the second set of image data; and generating a model of the region based on the first location and the second location associated with the at least one object.
29. (New) The method of claim 28, wherein the ranking of the at least one object is based on at least one of (1) relevance of the at least one object to an application or activity, (2) user input indicating interest levels of the at least one object, or (3) amounts of user interaction with the at least one object.
13. A method, comprising: receiving search region image data of a search region captured by one or more imaging devices over a period of time; identifying objects in the search region based on the search region image data, wherein the objects move relative to the imaging devices over the period of time; generating a ranking of the objects based on at least one of (1) relevance of the objects to an application or activity, (2) user input indicating interest levels of the objects, or (3) amounts of user interaction with the objects; determining a subset of the objects based on the ranking; determining tracking regions corresponding to the subset of the objects, each of the tracking regions being smaller than the search region; and receiving, at a first time, a first set of tracking region image data of the tracking regions corresponding to the subset of the objects captured by the one or more imaging devices; determining a first set of locations associated with the subset of the objects using the set of tracking region image data; receiving, at a second time, a second set of tracking region image data of the tracking regions corresponding to the subset of objects captured by the one or more imaging devices; determining a second set of locations associated with the subset of the objects using the second set of tracking region image data; and generating a model of the search region based on the first set of locations and the second set of locations associated with the subset of the objects.
30. (New) The method of claim 29, further comprising determining the amounts of user interaction with the at least one object based on user eye tracking.
14. The method of claim 13, further comprising determining the amounts of user interaction with the objects based on user eye tracking.
31. (New) The method of claim 29, further comprising determining the amounts of user interaction with the at least one object based on user hand tracking.
15. The method of claim 13, further comprising determining the amounts of user interaction with the objects based on user hand tracking.
32. (New) The method of claim 28, further comprising combining objects and models of multiple regions into an aggregated model of an environment.
16. The method of claim 13, further comprising combining objects and models of multiple search regions into an aggregated model of an environment.
33. (New) The method of claim 28, further comprising: generating, by a depth camera of the one or more imaging devices, region image data; and generating, by a time of flight sensor of the one or more imaging devices, tracking region image data of the region.
17. The method of claim 13, further comprising: generating, by a depth camera of the one or more imaging devices, the search region image data; and generating, by a time of flight sensor of the one or more imaging devices, tracking region image data of a first tracking region of the tracking regions.
34. (New) A non-transitory computer readable medium comprising stored instructions that, when executed by a processor, configure the processor to: receive region image data of a region captured by the one or more imaging devices over a period of time; identify a plurality of objects in the region based on the region image data, wherein at least one object of the plurality of objects moves relative to the one or more imaging devices over the period of time; generate a ranking of the plurality of objects; receive, at a first time determined at least in part based on the ranking, a first set of image data of the region corresponding to the at least one object captured by the one or more imaging devices; determine a first location associated with the at least one object using the first set of image data; receive, at a second time determined at least in part based on the ranking, a second set of image data of the region corresponding to the at least one object captured by the one or more imaging devices; determine a second location associated with the at least one object using the second set of image data; and generate a model of the region based on the first location and the second location associated with the at least one object.
35. (New) The non-transitory computer readable medium of claim 34, wherein the ranking of the at least one object is based on at least one of (1) relevance of the at least one object to an application or activity, (2) user input indicating interest levels of the at least one object, or (3) amounts of user interaction with the at least one object.
18. A non-transitory computer readable medium comprising stored instructions that, when executed by a processor, configures the processor to: receive search region image data of a search region captured by one or more imaging devices over a period of time; identify objects in the search region based on the search region image data, wherein the objects move relative to the imaging device over the period of time; generate a ranking of the objects based on at least one of (1) relevance of the objects to an application or activity, (2) user input indicating interest levels of the objects, or (3) amounts of user interaction with the objects; determine a subset of the objects based on the ranking; determine tracking regions corresponding to the subset of the objects, each of the tracking regions being smaller than the search region; and receive, at a first time, a first set of tracking region image data of the tracking regions corresponding to the subset of the objects captured by the one or more imaging devices; determine a first set of locations associated with the subset of the objects using the set of tracking region image data; receive, at a second time, a second set of tracking region image data of the tracking regions corresponding to the subset of objects captured by the one or more imaging devices; determine a second set of locations associated with the subset of the objects using the set of tracking region image data; and generate a model of the search region based on the first set of locations and the second set of locations associated with the subset of the objects.
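For orientation, the workflow recited in the compared claims (identify objects in region image data, generate a ranking, receive image data at a first and second time determined at least in part by the ranking, determine locations, and generate a model of the region) can be summarized in the following minimal sketch. Every function, class, and variable name below is a hypothetical illustration; none is language from either claim set or from the cited references.

```python
import random
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    object_id: int
    rank_score: float                      # relevance / interest / interaction
    locations: list = field(default_factory=list)

def capture_region_image():
    # Stand-in for region image data from the one or more imaging devices.
    return [random.random() for _ in range(16)]

def identify_objects(region_image):
    # Stand-in detector: pretend three moving objects were found in the region.
    return [TrackedObject(i, rank_score=random.random()) for i in range(3)]

def determine_location(image_data):
    # Stand-in localization: reduce image data to a coarse (x, y) estimate.
    return (sum(image_data) / len(image_data), max(image_data))

def generate_region_model(objects):
    # Stand-in model: map each object to the locations determined for it.
    return {obj.object_id: obj.locations for obj in objects}

def track_and_model(num_captures=2):
    # Identify a plurality of objects in region image data captured over time.
    objects = identify_objects(capture_region_image())
    # Generate a ranking of the plurality of objects.
    ranking = sorted(objects, key=lambda obj: obj.rank_score, reverse=True)
    # Receive image data at a first time and a second time determined at least
    # in part by the ranking, and determine a location at each time.
    for _ in range(num_captures):
        for obj in ranking:                # capture order follows the ranking
            obj.locations.append(determine_location(capture_region_image()))
    # Generate a model of the region from the determined locations.
    return generate_region_model(objects)

print(track_and_model())
```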
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21, 25-28, 32, 34, and 38-40 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20140241614 A1) in view of Yamashita (US 20140010409 A1).
-Regarding claim 21, Lee discloses a system comprising (Abstract; FIGS. 1-12): one or more imaging devices; and a processor configured to: receive region image data of a region captured by the one or more imaging devices over a period of time (FIG. 1, cameras 114, 116, 118, wide angle imaging camera data 134, narrow angle camera image data 136, local environment 112; FIGS. 4, 7-8; [0021], “support location-based functionality, such as SLAM or AR”; [0043], “time-of-flight imaging cameras”; [0047]; [0058], “SLAM … over time …”); identify a plurality of objects in the region based on the region image data, wherein at least one object of the plurality of objects moves relative to the one or more imaging devices over the period of time (FIGS. 1, 7-8; [0017], “determination of a relative position or relative orientation … to support … augmented reality (AR) functionality, visual odometry or other simultaneous localization and mapping (SLAM) … identify spatial features representing objects in the local environment and their distances”; [0018], “identification of the relative position/orientation of objects in the local environment”; [0023]; [0025], “calculate the depths of the objects, that is, the distances of the objects from the electronic device …”; [0033]; [0051]; [0058]); receive a set of image data of the region corresponding to the at least one object captured by the one or more imaging devices (FIG. 1, data 134; FIG. 4, image sensors 408, 116); and determine a location associated with the at least one object using the set of image data (FIG. 4, location 420).
Lee does not disclose generating a ranking of the objects; receiving, at a first time determined at least in part based on the ranking, a first set of image data of the region; determining a first location associated with the at least one object using the first set of image data; receiving, at a second time determined at least in part based on the ranking, a second set of image data of the region; determining a second location associated with the at least one object using the second set of image data; and generating a model of the region based on the first location and the second location associated with the at least one object.
In the same field of endeavor, Yamashita teaches a method to track a target object in a time-series image wherein the object moves relative to the imaging devices over the period of time (Yamashita: [0160], "the target object is detected and tracked from the captured moving image ... applicable to the tracking of a target object moving in an obtained time-series image"). Yamashita further teaches generating a ranking of the objects (Yamashita: FIG. 2, unit 32; [0141], “a ranking from the top to the first predetermined number-th”; [0169]); receiving, at a first time determined at least in part based on the ranking, a first set of image data of the region; determining a first location associated with the at least one object using the first set of image data (Yamashita: FIG. 1; FIG. 2, unit 32; [0141]; FIG. 8, S1; FIG. 9; [0123]; [0126]; [0162], “a time-series image including a plurality of frames includes: a location information … in a first frame …”); receiving, at a second time determined at least in part based on the ranking, a second set of image data of the region; determining a second location associated with the at least one object using the second set of image data (Yamashita: FIGS. 1-2; FIG. 8, S3-S4; FIG. 10; [0126]-[0127]; [0162], “sets a plurality of different search locations in a second frame which is any one of frames following the first frame …”); and generating a model of the region based on the first location and the second location associated with the at least one object (Yamashita: FIG. 10, S22; [0138]-[0143]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lee with the teaching of Yamashita by utilizing the object tracking process of Yamashita in order to ensure both the capability to follow a target object and high-speed processing (Yamashita: [0022]).
-Regarding claim 28, Lee discloses a method comprising (Abstract; FIGS. 1-12): receiving region image data of a region captured by the one or more imaging devices over a period of time (FIG. 1, cameras 114, 116, 118, wide angle imaging camera data 134, narrow angle camera image data 136, local environment 112; FIGS. 4, 7-8; [0021], “support location-based functionality, such as SLAM or AR”; [0043], “time-of-flight imaging cameras”; [0047]; [0058], “SLAM … over time …”); identifying a plurality of objects in the region based on the region image data, wherein at least one object of the plurality of objects moves relative to the one or more imaging devices over the period of time (FIGS. 1, 7-8; [0017], “determination of a relative position or relative orientation … to support … augmented reality (AR) functionality, visual odometry or other simultaneous localization and mapping (SLAM) … identify spatial features representing objects in the local environment and their distances”; [0018], “identification of the relative position/orientation of objects in the local environment”; [0023]; [0025], “calculate the depths of the objects, that is, the distances of the objects from the electronic device …”; [0033]; [0051]; [0058]); receiving a set of image data of the region corresponding to the at least one object captured by the one or more imaging devices (FIG. 1, data 134; FIG. 4, image sensors 408, 116); and determining a location associated with the at least one object using the set of image data (FIG. 4, location 420).
Lee does not disclose generating a ranking of the objects; receiving, at a first time determined at least in part based on the ranking, a first set of image data of the region; determining a first location associated with the at least one object using the first set of image data; receiving, at a second time determined at least in part based on the ranking, a second set of image data of the region; determining a second location associated with the at least one object using the second set of image data; and generating a model of the region based on the first location and the second location associated with the at least one object.
In the same field of endeavor, Yamashita teaches a method to track a target object in a time-series image wherein the object moves relative to the imaging devices over the period of time (Yamashita: [0160], "the target object is detected and tracked from the captured moving image ... applicable to the tracking of a target object moving in an obtained time-series image"). Yamashita further teaches generating a ranking of the objects (Yamashita: FIG. 2, unit 32; [0141], “a ranking from the top to the first predetermined number-th”; [0169]); receiving, at a first time determined at least in part based on the ranking, a first set of image data of the region; determining a first location associated with the at least one object using the first set of image data (Yamashita: FIG. 1; FIG. 2, unit 32; [0141]; FIG. 8, S1; FIG. 9; [0123]; [0126]; [0162], “a time-series image including a plurality of frames includes: a location information … in a first frame …”); receiving, at a second time determined at least in part based on the ranking, a second set of image data of the region; determining a second location associated with the at least one object using the second set of image data (Yamashita: FIGS. 1-2; FIG. 8, S3-S4; FIG. 10; [0126]-[0127]; [0162], “sets a plurality of different search locations in a second frame which is any one of frames following the first frame …”); and generating a model of the region based on the first location and the second location associated with the at least one object (Yamashita: FIG. 10, S22; [0138]-[0143]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lee with the teaching of Yamashita by utilizing the object tracking process of Yamashita in order to ensure both the capability to follow a target object and high-speed processing (Yamashita: [0022]).
-Regarding claim 34, Lee discloses a non-transitory computer readable medium comprising stored instructions that, when executed by a processor (FIG. 8), configure the processor to (Abstract; FIGS. 1-12): receive region image data of a region captured by the one or more imaging devices over a period of time (FIG. 1, cameras 114, 116, 118, wide angle imaging camera data 134, narrow angle camera image data 136, local environment 112; FIGS. 4, 7-8; [0021], “support location-based functionality, such as SLAM or AR”; [0043], “time-of-flight imaging cameras”; [0047]; [0058], “SLAM … over time …”); identify a plurality of objects in the region based on the region image data, wherein at least one object of the plurality of objects moves relative to the one or more imaging devices over the period of time (FIGS. 1, 7-8; [0017], “determination of a relative position or relative orientation … to support … augmented reality (AR) functionality, visual odometry or other simultaneous localization and mapping (SLAM) … identify spatial features representing objects in the local environment and their distances”; [0018], “identification of the relative position/orientation of objects in the local environment”; [0023]; [0025], “calculate the depths of the objects, that is, the distances of the objects from the electronic device …”; [0033]; [0051]; [0058]); receive a set of image data of the region corresponding to the at least one object captured by the one or more imaging devices (FIG. 1, data 134; FIG. 4, image sensors 408, 116); and determine a location associated with the at least one object using the set of image data (FIG. 4, location 420).
Lee does not disclose generating a ranking of the objects; receiving, at a first time determined at least in part based on the ranking, a first set of image data of the region; determining a first location associated with the at least one object using the first set of image data; receiving, at a second time determined at least in part based on the ranking, a second set of image data of the region; determining a second location associated with the at least one object using the second set of image data; and generating a model of the region based on the first location and the second location associated with the at least one object.
In the same field of endeavor, Yamashita teaches a method to track a target object in a time-series image wherein the object moves relative to the imaging devices over the period of time (Yamashita: [0160], "the target object is detected and tracked from the captured moving image ... applicable to the tracking of a target object moving in an obtained time-series image"). Yamashita further teaches generating a ranking of the objects (Yamashita: FIG. 2, unit 32; [0141], “a ranking from the top to the first predetermined number-th”; [0169]); receiving, at a first time determined at least in part based on the ranking, a first set of image data of the region; determining a first location associated with the at least one object using the first set of image data (Yamashita: FIG. 1; FIG. 2, unit 32; [0141]; FIG. 8, S1; FIG. 9; [0123]; [0126]; [0162], “a time-series image including a plurality of frames includes: a location information … in a first frame …”); receiving, at a second time determined at least in part based on the ranking, a second set of image data of the region; determining a second location associated with the at least one object using the second set of image data (Yamashita: FIGS. 1-2; FIG. 8, S3-S4; FIG. 10; [0126]-[0127]; [0162], “sets a plurality of different search locations in a second frame which is any one of frames following the first frame …”); and generating a model of the region based on the first location and the second location associated with the at least one object (Yamashita: FIG. 10, S22; [0138]-[0143]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lee with the teaching of Yamashita by utilizing the object tracking process of Yamashita in order to ensure both the capability to follow a target object and high-speed processing (Yamashita: [0022]).
-Regarding claims 25 and 38, Lee in view of Yamashita teaches the system of claim 21 and the non-transitory computer readable medium of claim 34. The combination further teaches capturing the region image data for the at least one object of a subset of the at least one object (Lee: FIG. 1, cameras 114, 116, environment 112; FIG. 4; FIG. 7, blocks 702, 704; [0047], “capture of wide angle view (WAV) image data for the local environment 112 … capture of narrow angle view (NAV) image data”).
-Regarding claims 26, 32 and 39, Lee in view of Yamashita teaches the system of claim 21, the method of claim 28, and the non-transitory computer readable medium of claim 34. The combination further teaches combining objects and models of multiple regions into an aggregated model of an environment (Lee: FIG. 7, blocks 722, 724; [0064], “determines the AR information to be graphically presented to the user as a graphical overlay for the image frame generated or selected at block 720 and provides the image frame and the graphical overlay for display at the electronic device 100 at block 724 …”; see also Yamashita: FIG. 6).
-Regarding claims 27 and 40, Lee in view of Yamashita teaches the system of claim 21 and the non-transitory computer readable medium of claim 34. The combination further teaches wherein the one or more imaging devices include: a first imaging device configured to generate search region image data of the region (Lee: FIG. 1, environment 112, camera 114; FIG. 4; FIG. 7, block 702; [0047]); and a second imaging device configured to generate tracking region image data of the region (Lee: FIG. 1, environment 112, camera 116; FIG. 4; FIG. 7, block 704; [0047]).
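As a rough illustration of the two-device arrangement mapped above (a first device imaging the full search region, a second imaging a smaller tracking region), the following sketch may help; the class and field names are invented for illustration and do not appear in Lee.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: float
    y: float
    width: float
    height: float

    def area(self):
        return self.width * self.height

@dataclass
class ImagingDevice:
    name: str
    field_of_view: Region    # the region this device is configured to image

    def capture(self):
        # Stand-in capture: report which region the device imaged.
        return {"device": self.name, "area": self.field_of_view.area()}

# First imaging device: wide angle view covering the full search region.
search_device = ImagingDevice("wide_angle", Region(0.0, 0.0, 10.0, 10.0))
# Second imaging device: narrow view of a tracking region smaller than the
# search region, directed at a tracked object.
tracking_device = ImagingDevice("narrow_angle", Region(4.0, 4.0, 2.0, 2.0))

print(search_device.capture())
print(tracking_device.capture())
```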
Claims 22-23, 29-30, and 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20140241614 A1) in view of Yamashita (US 20140010409 A1), and further in view of Ahuja et al. (US 20210132691 A1), hereinafter Ahuja.
-Regarding claims 22, 29 and 35, Lee in view of Yamashita teaches the system of claim 21, the method of claim 28, and the non-transitory computer readable medium of claim 34.
Lee in view of Yamashita does not teach wherein the ranking of the at least one object is based on at least one of (1) relevance of the at least one object to an application or activity, (2) user input indicating interest levels of the at least one object, or (3) amounts of user interaction with the at least one object.
However, Ahuja is analogous art pertinent to the problem to be solved in this application and teaches a method for eye movement tracking (Ahuja: FIGS. 2, 4; [0005]; [0045], “tracks movement of the eye or eyes of a user while viewing the displayed set of objects”; [0064]). Ahuja further teaches wherein the ranking of the at least one object is based on at least one of (1) relevance of the at least one object to an application or activity, (2) user input indicating interest levels of the at least one object, or (3) amounts of user interaction with the at least one object (Ahuja: [0058], “rank objects using object attribute similarity … to enable the loading of related objects”; [0076], “selected objects are ranked according to likelihood to be of interest to the viewing user”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Lee in view of Yamashita with the teaching of Ahuja by ranking the at least one object based on at least one of (1) relevance of the at least one object to an application or activity, (2) user input indicating interest levels of the at least one object, or (3) amounts of user interaction with the at least one object, in order to provide analysis and tracking of objects of interest to the user or related to an activity (Ahuja: [0076]-[0077]).
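For illustration, a scoring scheme in the spirit of the three claimed ranking bases might look like the sketch below; the weights, field names, and example objects are invented for illustration and are not taken from Ahuja.

```python
def rank_objects(objects):
    # Hypothetical score combining the three claimed ranking bases:
    # (1) relevance to an application or activity, (2) user input indicating
    # interest levels, and (3) amounts of user interaction (e.g., gaze dwell).
    def score(obj):
        return (2.0 * obj.get("app_relevance", 0.0)
                + 1.5 * obj.get("user_interest", 0.0)
                + 1.0 * obj.get("interaction_count", 0))
    return sorted(objects, key=score, reverse=True)

objects = [
    {"id": "mug", "app_relevance": 0.2, "user_interest": 0.9, "interaction_count": 4},
    {"id": "laptop", "app_relevance": 0.8, "user_interest": 0.3, "interaction_count": 1},
    {"id": "chair", "app_relevance": 0.1, "user_interest": 0.1, "interaction_count": 0},
]
print([obj["id"] for obj in rank_objects(objects)])
```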
-Regarding claims 23, 30 and 36, Lee in view of Yamashita, and further in view of Ahuja teaches the system of claim 22, the method of claim 29, and the non-transitory computer readable medium of claim 35. The modification further teaches determining the amounts of user interaction with the at least one object based on user eye tracking (Lee: [0066], “based on changes in the position of the user's head (or the user's eyes) relative to the display 108 … react to head/eye position changes as represented in the head tracking or eye tracking information …”; [0080], “based on user head position or eye position represented in the received head tracking information”; FIGS. 7-8; see also Ahuja: [0005]; [0045]; [0064]).
Claims 24, 31, and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20140241614 A1) in view of Yamashita (US 20140010409 A1), further in view of Ahuja et al. (US 20210132691 A1), hereinafter Ahuja, and further in view of Shields (US 10460159 B1).
-Regarding claims 24, 31 and 37, Lee in view of Yamashita, and further in view of Ahuja teaches the system of claim 22, the method of claim 29, and the non-transitory computer readable medium of claim 35.
Lee in view of Yamashita, and further in view of Ahuja does not teach determining the amounts of user interaction with the objects based on user hand tracking.
However, Shields is analogous art pertinent to the problem to be solved in this application and teaches a method enabling a first user at a video endpoint to communicate with a far-end user at a communication device via a relay service providing translation services for the first user (Shields: Abstract; FIGS. 1-4). Shields further teaches determining the amounts of user interaction with the objects based on user hand tracking (Shields: FIGS. 3-4; Col. 15, lines 8-16).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Lee in view of Yamashita, and further in view of Ahuja, with the teaching of Shields by utilizing the hand tracking process of Shields in order to communicate with a far-end user at a communication device (Shields: Abstract).
Claim 33 is rejected under 35 U.S.C. 103 as being unpatentable over Lee (US 20140241614 A1) in view of Yamashita (US 20140010409 A1), and further in view of Im et al. (US 20130050425 A1), hereinafter Im.
-Regarding claim 33, Lee in view of Yamashita teaches the method of claim 28.
Lee in view of Yamashita does not teach wherein the first imaging device is a depth camera and the second imaging device is a time of flight sensor.
However, Im is analogous art pertinent to the problem to be solved in this application and teaches a gesture-based user interface method that recognizes a user gesture based on a depth image (Im: Abstract; FIGS. 1-18). Im further teaches wherein the first imaging device is a depth camera and the second imaging device is a time of flight sensor (Im: [0024]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Lee in view of Yamashita with the teaching of Im by utilizing the TOF type camera and the depth camera in order to resolve the inconvenience of requiring a user to operate a separate input unit to control an apparatus, and to provide a more intuitive user interface (Im: [0005]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU, whose telephone number is (571) 272-4539. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAO LIU/Primary Examiner, Art Unit 2664