DETAILED ACTION
Status of Claims
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 9, 2026 has been entered.
Claims 1 and 13-16 have been added.
Claims 1-16 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
The rejection of claims 1-16 under 35 USC § 101 is maintained. Please see the Response to Arguments.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Per MPEP 2106.03, Eligibility Step 1: The Four Categories of Statutory Subject Matter [R-07.2022], Step 1 determines whether the claims fall within a statutory category. Herein, claims 1-13 and 16 fall within the statutory category of a machine, claim 14 falls within the statutory category of a process, and claim 15 falls within the statutory category of an article of manufacture. Hence, the claims qualify as potentially eligible subject matter under 35 U.S.C. § 101. With Step 1 satisfied, the analysis proceeds per MPEP 2106.04, Eligibility Step 2A: Whether a Claim is Directed to a Judicial Exception [R-07.2022]. Step 2 is the two-part analysis from Alice Corp. (also called the Mayo test). The 2019 PEG sets forth a revised procedure for Step 2A (called “revised Step 2A”) under which a claim is not “directed to” a judicial exception unless the claim satisfies a two-prong inquiry. The two-prong inquiry is as follows. Prong One: evaluate whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). If the claim recites an exception, then Prong Two: evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception. The claim(s) recite(s) the following abstract idea, indicated by non-boldface font, and additional limitations, indicated by boldface font:
Claims 1, 14 and 15:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a first user input instantiating a reminder to perform a user activity, wherein the user activity is to be performed by a user of the electronic device with respect to a first object; and
in response to receiving the first user input instantiating the reminder to perform the user activity:
obtaining a data structure representing a set of positions of a respective set of objects, wherein the data structure includes at least a first position of the first object;
receiving first location information indicating a current location of the user of the electronic device; determining a gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device;
based on the current location of the user of the electronic device and the gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device, determining whether the first position of the first object, which is included in the data structure, is visible to the user of the electronic device; and
in accordance with a determination that the first position of the first object is visible to the user of the electronic device, providing the reminder to the user.
Per Prong One of Step 2A, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of Mathematical Concepts, Mental Processes, and Certain Methods of Organizing Human Activity. Particularly, the identified recitation falls within Mental Processes (concepts performed in the human mind, including observations, evaluations, judgments, and opinions) and Certain Methods of Organizing Human Activity (such as managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions). Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The electronic device comprising one or more processors and a memory is recited at a high level of generality, i.e., as a generic computing and processing system. This electronic device comprising one or more processors and a memory is no more than mere instructions to apply the exception using generic computing/electronic devices, each comprising at least a processor and memory. Further, a processor configured to cause receiving/determining/transmitting of data is a mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, since the claims are directed to the determined judicial exception in view of the two prongs of Step 2A, the analysis proceeds to Step 2B per MPEP 2106.05, Eligibility Step 2B: Whether a Claim Amounts to Significantly More [R-07.2022].
Therein, the additional elements, and combinations thereof, are examined to determine whether the claims as a whole amount to significantly more than the judicial exception. It is noted here that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise the additional elements of the electronic device comprising one or more processors and a memory. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, executing all of the steps/functions by a user/service subsystem is a mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application). For further support, the Applicant’s specification supports the claims being directed to the use of a generic electronic device comprising one or more processors and a memory at paragraph 0188: “Device 600 has bus 612 that operatively couples I/O section 614 with one or more computer processors 616 and memory 618.” And paragraph 0319: “Method 1100 may be performed using one or more electronic devices (e.g., devices 104, 200, 600, 1002) with one or more processors and memory.” See also paragraphs 0045 and 0354 and Figures 6B and 7A.
Taken as an ordered combination, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitations are directed to limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting and non-exclusive examples: i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)); ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)); iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook. The courts have recognized the following computer functions, inter alia, to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., a process/machine for performing the present claims); and receiving or transmitting data (e.g., the present claims). The dependent claims 2-13 and 16 do not cure the above-stated deficiencies; in particular, the dependent claims further narrow the abstract idea without reciting additional elements that integrate the exception into a practical application of the exception or that provide significantly more than the abstract idea. Claim 2 further limits the abstract idea in that the first user input includes a natural-language speech input (a more detailed abstract idea remains an abstract idea). Claim 3 further limits the abstract idea by generating at least part of the data structure representing the set of positions of the respective set of objects (a more detailed abstract idea remains an abstract idea). Claim 4 further limits the abstract idea in that generating at least part of the data structure representing the set of positions of the respective set of objects includes: receiving first view information including at least one object of the set of objects; determining a second location associated with the first view information; and determining, based on the first view information and the second location, a position of the at least one object (a more detailed abstract idea remains an abstract idea).
Claim 5 further limits the abstract idea in that the first view information includes data received from an image sensor, and determining the position of the at least one object includes performing image recognition on the data received from the image sensor (a more detailed abstract idea remains an abstract idea). Claim 6 further limits the abstract idea in that generating at least part of the data structure representing the set of positions of the respective set of objects includes receiving a user input identifying the at least one object (a more detailed abstract idea remains an abstract idea). Claim 7 further limits the abstract idea in that the first view information represents a field-of-view of one user of a set of users, wherein the set of users includes the user of the electronic device (a more detailed abstract idea remains an abstract idea). Claim 8 further limits the abstract idea in that the first location information includes global positioning system (GPS) data (a more detailed abstract idea remains an abstract idea). Claim 9 further limits the abstract idea in that the first location information includes connection data for the electronic device (a more detailed abstract idea remains an abstract idea). Claim 10 further limits the abstract idea in that, in accordance with a determination that the first position of the first object is not visible to the user of the electronic device, a directional indication of a first position of the first object is provided to the user (a more detailed abstract idea remains an abstract idea). Claim 11 further limits the abstract idea in that providing the directional indication of the first position of the first object includes displaying a visual indication (a more detailed abstract idea remains an abstract idea). Claim 12 further limits the abstract idea in that providing the directional indication of the first position of the first object includes producing an audio indication (a more detailed abstract idea remains an abstract idea).
Claim 13 further limits the abstract idea in that, in accordance with a determination that the first position of the first object is not visible to the user of the electronic device and prior to providing the reminder to the user: receiving second location information indicating a second location of the user of the electronic device; determining a second gaze direction of the user of the electronic device with respect to the second location of the user of the electronic device; and, based on the second location and the second gaze direction of the user of the electronic device with respect to the second location of the user of the electronic device, determining whether the first position of the first object, which is included in the data structure, is visible to the user of the electronic device (a more detailed abstract idea remains an abstract idea). And claim 16 further limits the abstract idea in that determining, based on the first location information and the gaze direction, whether the first position of the first object is visible to the user of the electronic device includes: estimating, based on the current location and the gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device, a range of visible positions within the current location; and determining whether the first position of the first object is within the range of visible positions (a more detailed abstract idea remains an abstract idea).
Since there are no elements or ordered combination of elements that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 U.S.C. § 101. Thus, viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amount to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Response to Arguments
Applicant's arguments filed on January 9, 2026 have been fully considered but they are not persuasive.
With regard to the 35 U.S.C. 101 rejection, Applicant argues that (1) “claim 1 is not directed to certain methods of organizing human activity, as alleged by the Examiner, but is rather directed to a non-abstract improvements to computer technology using logical structure and processes, thereby integrating any judicial exceptions into a practical application.” (Remarks, pages 7-9).
With regard to the 35 U.S.C. 102 rejection, Applicant’s arguments, see pages 10-12, filed on 1/9/2026 with respect to the rejection(s) of claim(s) 1-9 and 14-16 under 35 U.S.C. 102(a)(1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Karakotsios et al., (US 9,317,113 B1).
In response to Applicant’s argument (1), Examiner respectfully disagrees. Claim 1 recites an electronic device that receives an input from a user instantiating a reminder to perform a user activity with respect to a first object, obtains a data structure representing a set of positions of a respective set of objects including the first position of the first object, receives the current location of the user, determines a gaze direction with respect to the current location of the user, determines, based on the current location of the user and the gaze direction, whether the position of the first object is visible to the user, and, if the first object is visible, provides the reminder to the user, as described in the Applicant's disclosure at paragraph 0031: "may provide a reminder to a user based on visual context. After instantiating a reminder to a user to perform a user activity with respect to a first object, when the time comes to provide the reminder to the user, a determination is made whether the first object is visible to the user (e.g., whether a current field-of-view of the user includes the first object). If the first object is not visible to the user, a directional indication may be provided to the user to direct the user to where the first object is visible. Once the first object is visible to the user, the reminder to perform the user activity is provided to the user."
Therefore, claim 1 recites an abstract idea falling within the Guidance's subject-matter grouping of Mental Processes, i.e., concepts performed in the human mind, including observations (reminder to perform the user activity with respect to a first object, set of positions of a respective set of objects including the first position of the first object, user position), evaluation (user position, user gaze direction, first object position), judgment (first object is visible), and opinion (reminder is provided to the user), and Certain Methods of Organizing Human Activity, such as managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions, i.e., reminders to perform a user activity when the first object is visible. The same rationale applies to claims 14 and 15.
In addition, per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The electronic device comprising one or more processors and a memory is recited at a high level of generality, i.e., as a generic processor performing the generic computer functions of receiving/determining/transmitting data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component, i.e., using a computer as a tool to “apply” the recited judicial exception (see MPEP § 2106.05(f)). Considering the claims as a whole, these additional limitations merely add generic computer activities, i.e., receiving (a user reminder to perform a user activity with respect to a first object, a data structure with a set of positions of a respective set of objects including the first position of the first object, a user location), determining/analyzing (a gaze direction based on the user location, visibility of the first object), and transmitting (if the first object is visible, providing the reminder), which are generic functions of a processor. The recited electronic device comprising one or more processors and a memory merely links the abstract idea to a computer environment. In this way, the involvement of the electronic device comprising one or more processors and a memory is merely a field of use that contributes only nominally and insignificantly to the recited method, which indicates an absence of integration. Claim 1 uses the electronic device comprising one or more processors and a memory as a tool, in its ordinary capacity, to carry out the abstract idea. As to this level of computer involvement, mere automation of manual processes using generic computers does not necessarily indicate a patent-eligible improvement in computer technology.
Considered as a whole, the claimed method does not improve the functioning of the computer itself or any other technology or technical field of providing reminders. Further, a processor configured to cause receiving/determining/transmitting data to a device is a mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f) for more information about mere instructions to apply an exception. Per MPEP 2106.05(a) II. Improvements to any other technology or technical field, please see the examples that the courts have indicated may not be sufficient to show an improvement to technology, such as gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48. The same rationale applies to claims 14 and 15. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. With regard to the argument that the claims are similar to Ex parte Desjardins, Examiner respectfully disagrees. Examiner has carefully reviewed the specification and the claims and is unable to find any technical way in which the claims improve the functioning of the computer. Accordingly, at this time, all that is presented is a “bare assertion” of an improvement, with no details on how the reminder is provided based on the determination that the first object is visible, how the objects’ positions are compared/analyzed or captured in the data structure, or how the gaze direction is determined, that would provide technical benefits over existing systems.
Accordingly, at this time, this situation is viewed under MPEP 2106.04(d)(1): “Conversely, if the specification explicitly sets forth an improvement only in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine that the claim improves technology or a technical field.” The rejection is maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Poulos et al. (US 2014/0160157 A1), hereinafter “Poulos,” in view of Karakotsios et al. (US 9,317,113 B1), hereinafter “Karakotsios.”
Claim 1:
Poulos as shown discloses an electronic device, the electronic device comprising:
one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for (Figure 1);
receiving a first user input instantiating a reminder to perform a user activity, (¶ 0086: “The one or more reminders may be determined based on tasks entered into or accessible from a personal information manager, task manager, email application, calendar application, social networking application, online database application, software bug tracking application, issue tracking application, and/or time management application. […] Each of the one or more reminders may correspond with a particular task to be completed, one or more people associated with the particular task, a location associated with the particular task, a reminder frequency (e.g., that a particular reminder is issued every two weeks), and/or a completion time for the particular task.”);
wherein the user activity is to be performed by a user of the electronic device with respect to a first object (¶ 0082: “an HMD may be used to generate and display an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD.” See also Figure 4A, and ¶ 0083: both users, Tim and Joe receive reminders to perform their respective tasks, Joe (i.e., user) “reminders 25 include a first reminder corresponding with the first end user (“Joe”) to “Talk to Tim about Sue's birthday” Tim (i.e., first object); Tim (i.e., user) “ reminders 24 includes a third reminder to “Remember to pay Joe $20” Joe (i.e., first object));
and in response to receiving the first user input instantiating the reminder to perform the user activity: obtaining a data structure representing a set of positions of a respective set of objects, wherein the data structure includes at least a first position of the first object (¶ 0076: “To assist in the detection and/or tracking of objects, image and audio processing engine 194 may utilize structure data 198 and object and gesture recognition engine 190.” ¶ 0078: “The image and audio processing engine 194 may utilize structural data 198 while performing object recognition. Structure data 198 may include structural information about targets and/or objects to be tracked. For example, a skeletal model of a human may be stored to help recognize body parts. In another example, structure data 198 may include structural information regarding one or more inanimate objects in order to help recognize the one or more inanimate objects.”);
Poulos teaches in ¶ 0023: “if the particular person is within a particular distance of the HMD (e.g., determined using GPS location information corresponding with a second mobile device associated with the particular person).” See also Figure 6B and ¶ 0101: “The location information may comprise GPS coordinates associated with a mobile device used by the particular person.” ¶ 0043: “HMD 200 may perform gaze detection for each eye of an end user's eyes using gaze detection elements,” Figure 7, step 704, “Detect a second person different from the first person within a field of view of the first mobile device,” ¶ 0107: “a second person different from the first person is detected within a field of view of the first mobile device. The first mobile device may comprise an HMD. The second person may be detected within the field of view of the first mobile device by applying object recognition and/or facial recognition techniques to images captured by the HMD,” and ¶ 0101: “the particular person within the environment is identified based on the one or more images and the location information.” Poulos is silent with regard to the following limitations. However, Karakotsios, in the analogous art of content delivery management, discloses the following limitations:
receiving first location information indicating a current location of the user of the electronic device (col. 10, lines 45-47: “the device might include a GPS or have access to cellular triangulation information that can be used to determine a present location of the user.” Col. 10, line 62 through col. 11, line 6: “if the user is outside and is facing a building or monument, the device can use information such as the coordinates of the device and the orientation of the device (as may be determined using a GPS and an electronic compass, respectively) to determine a direction the user is facing from a current location, which can be compared with map information to determine that the user is facing that building or monument. Similarly, if the user is in the user's home and the device has built a 3D model of the user's home, the device can potentially use location and direction information to determine the object in which the user is likely interested.”);
determining a gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device (Figures 2 and 4; Figure 4, reference character 402, “Determine gaze direction of user”; see also col. 3, lines 41-67; col. 4, lines 21-38; col. 5, lines 37-59);
based on the current location of the user of the electronic device and the gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device, determining whether the first position of the first object, which is included in the data structure, is visible to the user of the electronic device; and (col. 7, lines 47-57: “Based at least in part upon the determination of object location and gaze direction, an object can be identified 406 that corresponds to the user's gaze direction. As discussed, this can involve vector addition or other such geometric calculations. Once the object of interest has been located, one or more object recognition processes can be executed to attempt to recognize the object 408, such as to determine a type of the object, determination of the specific instance of the object, and the like. Once the object, or type of object, is recognized, a determination can be made 410 as to whether there is information available for the object,”);
Both Poulos and Karakotsios teach content delivery management. Poulos teaches in the Abstract “generating and displaying people-triggered holographic reminders […] reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD.” Karakotsios teaches in the Abstract “Upon recognizing the object, the user can be provided with information about the object, which in some cases can depend at least in part upon a current context or location of the object.” Thus, they are deemed to be analogous references, as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Karakotsios to the teaching of Poulos would have yielded predictable results and resulted in an improved system, because the level of ordinary skill in the art demonstrated by the applied references shows the ability to incorporate into similar systems such features as determining a gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device and, based on the current location of the user of the electronic device and the gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device, determining whether the first position of the first object, which is included in the data structure, is visible to the user of the electronic device. Further, as noted by Karakotsios, “a user might be able to indicate or train the device to provide certain information or inputs for certain types of objects. 
For example, a user could manually specify that the device should provide information about the user's upcoming appointments when the user gazes at a clock for a period of time. Alternatively, the user might always ask for schedule information after gazing at a clock, and the device can learn to provide that type of information in response to the user gazing at that type of object.” (Karakotsios, col. 11, lines 37-45).
In addition, Poulos teaches:
in accordance with a determination that the first position of the first object is visible to the user of the electronic device, providing the reminder to the user (Figure 4A and ¶ 0083: both users, Tim and Joe, receive reminders to perform their respective tasks. For Joe (i.e., the user), "reminders 25 include a first reminder corresponding with the first end user ('Joe') to 'Talk to Tim about Sue's birthday,'" where Tim is the first object; for Tim (i.e., the user), "reminders 24 includes a third reminder to 'Remember to pay Joe $20,'" where Joe is the first object);
Claims 14 and 15:
The limitations of claims 14 and 15 (¶ 0040: “Processing unit 236 may include one or more processors and a memory for storing computer readable instructions to be executed on the one or more processors. The memory may also store other types of data to be executed on the one or more processors.”) encompass substantially the same scope as claim 1. Accordingly, those similar limitations are rejected in substantially the same manner as claim 1, as described above.
Claim 2:
Poulos as shown discloses the following limitations:
wherein the first user input includes a natural-language speech input (¶ 0087: "The end user of the HMD may also enter one or more reminders into a personal information management application running on the HMD using voice commands and/or gestures. For example, the end user of the HMD may issue a voice command such as 'remind me about the concert when I see my parents.'" And ¶ 0077: "Processing unit 191 may include one or more processors for executing object, facial, and voice recognition algorithms." And ¶ 0037: "applying speech recognition techniques (e.g., to identify key words, phrases, or names)");
Claim 3:
Poulos as shown discloses the following limitations:
the one or more programs further including instructions for: generating at least part of the data structure representing the set of positions of the respective set of objects (¶ 0076: “To assist in the detection and/or tracking of objects, image and audio processing engine 194 may utilize structure data 198 and object and gesture recognition engine 190.” ¶ 0023: “if the particular person is within a particular distance of the HMD (e.g., determined using GPS location information corresponding with a second mobile device associated with the particular person)” see also Figure 5 and ¶ 0078 and ¶0088-0089);
Claim 4:
Poulos as shown discloses the following limitations:
wherein generating at least part of the data structure representing the set of positions of the respective set of objects includes: receiving first view information including at least one object of the set of objects; determining a second location associated with the first view information; and determining, based on the first view information and the second location, a position of the at least one object (¶ 0023: “if the particular person is within a particular distance of the HMD (e.g., determined using GPS location information corresponding with a second mobile device associated with the particular person).” See also figure 6B and ¶ 0101: “The location information may comprise GPS coordinates associated with a mobile device used by the particular person.” See also ¶ 0076 and 0078);
Claim 5:
Poulos as shown discloses the following limitations:
wherein the first view information includes data received from an image sensor, and wherein determining the position of the at least one object includes performing image recognition on the data received from the image sensor (Figure 7, step 704 "Detect a second person different from the first person within a field of view of the first mobile device", ¶ 0107: "a second person different from the first person is detected within a field of view of the first mobile device. The first mobile device may comprise an HMD. The second person may be detected within the field of view of the first mobile device by applying object recognition and/or facial recognition techniques to images captured by the HMD." And ¶ 0101: "the particular person within the environment is identified based on the one or more images and the location information.");
Claim 6:
Poulos as shown discloses the following limitations:
wherein generating at least part of the data structure representing the set of positions of the respective set of objects includes receiving a user input identifying the at least one object (¶ 0025: “a completion of a reminder may be automatically detected by applying speech recognition techniques (e.g., to identify key words, phrases, or names) to captured audio of a conversation occurring between the end user and the particular person.”);
Claim 7:
Poulos as shown discloses the following limitations:
wherein the first view information represents a field-of-view of one user of a set of users, wherein the set of users includes the user of the electronic device (Figures 4A-4B, see also ¶ 0083-0084: “FIG. 4A depicts one embodiment of an environment 400 in which a first end user (i.e., “Joe”) wearing an HMD 29 views an augmented reality environment that includes reminders 25 associated with both the first end user and a second end user (i.e., “Tim”) wearing a second HMD 28 within the environment 400.”);
Claim 8:
Poulos as shown discloses the following limitations:
wherein the first location information includes connection data for the electronic device (¶ 0092: "the second set of the one or more reminders may be transmitted to the second mobile device via a wireless connection (e.g., a WiFi connection)." And ¶ 0118: "RF transmitter/receiver 8308 may enable wireless communication via various wireless technology standards such as Bluetooth® or the IEEE 802.11 standards.");
Claim 9:
Poulos as shown discloses the following limitations:
wherein the first location information includes global positioning system (GPS) data (¶ 0101: "The location information may comprise GPS coordinates associated with a mobile device used by the particular person.");
Claim 16:
Poulos teaches in ¶ 0036: "a mobile device, such as mobile device 19, may be in communication with a server in the cloud, such as server 15, and may provide to the server location information (e.g., the location of the mobile device via GPS coordinates) and/or image information (e.g., information regarding objects detected within a field of view of the mobile device)." The following paragraphs describe the gaze direction with respect to objects detected within a field of view of the HMD, i.e., mobile device 19. ¶ 0043: "HMD 200 may perform gaze detection for each eye of an end user's eyes using gaze detection elements and a three-dimensional coordinate system in relation to one or more human eye elements such as a cornea center, a center of eyeball rotation, or a pupil center. Gaze detection may be used to identify where the end user is focusing within a field of view." And ¶ 0048: "FIG. 2D depicts one embodiment of a portion of an HMD 2 in which gaze vectors extending to a point of gaze are used for aligning a near inter-pupillary distance (IPD). HMD 2 is one example of a mobile device, such as mobile device 19 in FIG. 1. 
[…] The intersection of the gaze vectors 180 l and 180 r indicates that the end user is looking at real object 194." Figure 2B describes the HMD, i.e., the electronic device; figure 2C and ¶ 0045 illustrate "Through the display optical systems 14 l and 14 r in the eyeglass frame 115, the end user's field of view includes both real objects 190, 192, and 194 and virtual objects 182 and 184." And figure 2D and ¶ 0048: "The intersection of the gaze vectors 180 l and 180 r indicates that the end user is looking at real object 194." And ¶ 0082 with regard to figures 4A-4B: "an HMD may be used to generate and display an augmented reality environment to an end user of the HMD in which reminders associated with a particular person" (i.e., first object) "may be displayed if the particular person is within a field of view of the HMD." Poulos as explained above teaches a range of visible positions. Poulos is silent with regard to the following limitations. However, Karakotsios, in the analogous art of content delivery management, teaches the following limitations as shown:
wherein determining, based on the first location information and the gaze direction, whether the first position of the first object is visible to the user of the electronic device includes: estimating, based on the current location of the user of the electronic device and the gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device, a range of visible positions within the current location; and determining whether the first position of the first object is within the range of visible positions (Figure 2, note the objects 206 and 216 and col. lines “In this example, both objects are within the field of view of the camera 208, such that the device can determine the relative direction to each object.”);
Both Poulos and Karakotsios teach content delivery management. Poulos teaches in the Abstract "generating and displaying people-triggered holographic reminders […] reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD." Karakotsios teaches in the Abstract "Upon recognizing the object, the user can be provided with information about the object, which in some cases can depend at least in part upon a current context or location of the object." Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Karakotsios would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Karakotsios to the teaching of Poulos would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as wherein determining, based on the first location information and the gaze direction, whether the first position of the first object is visible to the user of the electronic device includes: estimating, based on the current location of the user of the electronic device and the gaze direction of the user of the electronic device with respect to the current location of the user of the electronic device, a range of visible positions within the current location; and determining whether the first position of the first object is within the range of visible positions into similar systems. Further, as noted by Karakotsios, "a user might be able to indicate or train the device to provide certain information or inputs for certain types of objects. 
For example, a user could manually specify that the device should provide information about the user's upcoming appointments when the user gazes at a clock for a period of time. Alternatively, the user might always ask for schedule information after gazing at a clock, and the device can learn to provide that type of information in response to the user gazing at that type of object.” (Karakotsios, col. 11, lines 37-45).
Claims 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Poulos et al. (US 2014/0160157 A1), hereinafter "Poulos", and Karakotsios et al. (US 9,317,113 B1), hereinafter "Karakotsios", as applied to claim 1 above, and further in view of Salter et al. (US 9,501,873 B2), hereinafter "Salter".
Claim 10:
Poulos in view of Karakotsios is silent with regard to the following limitations. However, Salter, in the analogous art of augmented reality environments, teaches the following limitations as shown:
the one or more programs further including instructions for: in accordance with a determination that the first position of the first object is not visible to the user of the electronic device, providing a directional indication of a first position of the first object to the user (Figure 9, reference character 902 “Identify objects outside of field of view” and reference character 904 “Indicate positional information associated with objects outside field of view”. See also col. 9, lines 22-58);
Both Poulos and Salter teach an augmented reality environment. Poulos teaches in ¶ 0023 "an HMD may provide an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD (e.g., determined using facial recognition techniques) or if the particular person is within a particular distance of the HMD (e.g., determined using GPS location information corresponding with a second mobile device associated with the particular person)." Salter teaches in the Abstract "operating a user interface on an augmented reality computing device comprising a see-through display system." Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Salter would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Salter to the teaching of Poulos in view of Karakotsios would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as, in accordance with a determination that the first position of the first object is not visible to the user of the electronic device, providing a directional indication of a first position of the first object to the user into similar systems. Further, as noted by Salter, "a visible asset located behind the user and slightly to the right may be indicated by a marker displayed on the right hand periphery of field of view 102. In other embodiments, visual indicators may be positioned in any other suitable manner." (Salter, col. 6, lines 4-6).
Claim 11:
Poulos in view of Karakotsios is silent with regard to the following limitations. However, Salter, in the analogous art of augmented reality environments, teaches the following limitations as shown:
wherein providing the directional indication of the first position of the first object includes displaying a visual indication (Figure 9, reference character 906 "Display indicators within the field of view", which includes markers, tendrils, and a virtual map. See also col. 9, line 22 to col. 10, line 3 and col. 7, lines 12-21: "The visual indicator 604 may comprise a route, a path, an arrow, or other indication of directionality which a user may follow in order to be directed to the object it represents. In the depicted embodiment, visual indicator 604 may be represented as a tendril (e.g. a vine-like feature or other representation of a line) extending from object 606, which may represent a notification block or other virtual object. Thus, a user may visually follow indicator 604 towards object 606 by turning towards the right to bring the object 606 into view.");
Both Poulos and Salter teach an augmented reality environment. Poulos teaches in ¶ 0023 "an HMD may provide an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD (e.g., determined using facial recognition techniques) or if the particular person is within a particular distance of the HMD (e.g., determined using GPS location information corresponding with a second mobile device associated with the particular person)." Salter teaches in the Abstract "operating a user interface on an augmented reality computing device comprising a see-through display system." Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Salter would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Salter to the teaching of Poulos in view of Karakotsios would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as wherein providing the directional indication of the first position of the first object includes displaying a visual indication into similar systems. Further, as noted by Salter, "a visible asset located behind the user and slightly to the right may be indicated by a marker displayed on the right hand periphery of field of view 102. In other embodiments, visual indicators may be positioned in any other suitable manner." (Salter, col. 6, lines 4-6).
Claim 12:
Poulos in view of Karakotsios is silent with regard to the following limitations. However, Salter, in the analogous art of augmented reality environments, teaches the following limitations as shown:
wherein providing the directional indication of the first position of the first object includes producing an audio indication (Figure 9, reference character 916 "Emit sounds indicating positional information". See also col. 10, lines 21-44: "the indication of the objects out of the user's view also may comprise audio indications, instead of or in addition to visual indicators. Thus, at 916, method 900 may include emitting one or more sounds from speakers indicating a presence of an object or objects. The audio indications may take any suitable form. For example, the indications may take the form of chimes/bells, beeps, other tones, as well as more complex outputs, such as computerized speech outputs");
Both Poulos and Salter teach an augmented reality environment. Poulos teaches in ¶ 0023 "an HMD may provide an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD (e.g., determined using facial recognition techniques) or if the particular person is within a particular distance of the HMD (e.g., determined using GPS location information corresponding with a second mobile device associated with the particular person)." Salter teaches in the Abstract "operating a user interface on an augmented reality computing device comprising a see-through display system." Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Salter would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Salter to the teaching of Poulos in view of Karakotsios would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as wherein providing the directional indication of the first position of the first object includes producing an audio indication into similar systems. Further, as noted by Salter, "a visible asset located behind the user and slightly to the right may be indicated by a marker displayed on the right hand periphery of field of view 102. In other embodiments, visual indicators may be positioned in any other suitable manner." (Salter, col. 6, lines 4-6).
Claim 13:
Poulos as explained above provides the reminder to the user as shown in Figures 4A-4B and Figure 7, step 704 "Detect a second person different from the first person within a field of view of the first mobile device", ¶ 0107: "a second person different from the first person is detected within a field of view of the first mobile device. The first mobile device may comprise an HMD. The second person may be detected within the field of view of the first mobile device by applying object recognition and/or facial recognition techniques to images captured by the HMD." And ¶ 0101: "the particular person within the environment is identified based on the one or more images and the location information." Karakotsios as explained above in claim 1 teaches the use of GPS for the location of the user; Karakotsios also determines the gaze direction and recognizes the object as shown in Figure 4. Poulos in view of Karakotsios is silent with regard to the following limitations. However, Salter, in the analogous art of augmented reality environments, teaches the following limitations as shown:
the one or more programs further including instructions for: in accordance with a determination that the first position of the first object is not visible to the user of the electronic device and prior to providing the reminder to the user: (col. 4, lines 44-67 to col. 5, lines 1-8: “FIG. 4 shows an example field of view 102 at a first time T1 and at a second, later time T2. At time T1, an object 402 and an object 404 are within the field of view 102. […] At time T2, the user has shifted their field of view toward the right so that object 416 associated with marker 412 comes into the user's field of view 102.” Note that object 416 is not visible to the user until T2);
receiving second location information indicating a second location of the user of the electronic device (col. 3, lines 54-57: "Display system 300 may further comprise additional sensors. For example, display system 300 may comprise a global positioning (GPS) subsystem 316 to allow a location of the display system 300 to be determined. This may help to identify objects, such as buildings, etc., that are located in the user's surrounding physical environment.");
Both Poulos and Salter teach an augmented reality environment. Poulos teaches in ¶ 0023 "an HMD may provide an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD (e.g., determined using facial recognition techniques) or if the particular person is within a particular distance of the HMD (e.g., determined using GPS location information corresponding with a second mobile device associated with the particular person)." Salter teaches in the Abstract "operating a user interface on an augmented reality computing device comprising a see-through display system." Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Salter would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Salter to the teaching of Poulos in view of Karakotsios would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as, in accordance with a determination that the first position of the first object is not visible to the user of the electronic device and prior to providing the reminder to the user, receiving second location information indicating a second location of the user of the electronic device into similar systems. Further, as noted by Salter, "a visible asset located behind the user and slightly to the right may be indicated by a marker displayed on the right hand periphery of field of view 102. In other embodiments, visual indicators may be positioned in any other suitable manner." (Salter, col. 6, lines 4-6).
Poulos is silent with regard to the following limitations. However, Karakotsios, in the analogous art of content delivery management, teaches the following limitations as shown:
determining a second gaze direction of the user of the electronic device with respect to the second location of the user of the electronic device; and (col. 10, lines 45-47: "the device might include a GPS or have access to cellular triangulation information that can be used to determine a present location of the user." Col. 10, lines 62-67 to col. 11, lines 1-6: "if the user is outside and is facing a building or monument, the device can use information such as the coordinates of the device and the orientation of the device (as may be determined using a GPS and an electronic compass, respectively) to determine a direction the user is facing from a current location, which can be compared with map information to determine that the user is facing that building or monument. Similarly, if the user is in the user's home and the device has built a 3D model of the user's home, the device can potentially use location and direction information to determine the object in which the user is likely interested." See Figures 2 and 4; in figure 4, note reference character 402 "Determine gaze direction of user". See also col. 3, lines 41-67, col. 4, lines 21-38, and col. 5, lines 37-59);
based on the second location and the second gaze direction of the user of the electronic device with respect to the second location of the user of the electronic device, determining whether the first position of the first object, which is included in the data structure, is visible to the user of the electronic device (col. 7, lines 47-57: "Based at least in part upon the determination of object location and gaze direction, an object can be identified 406 that corresponds to the user's gaze direction. As discussed, this can involve vector addition or other such geometric calculations. Once the object of interest has been located, one or more object recognition processes can be executed to attempt to recognize the object 408, such as to determine a type of the object, determination of the specific instance of the object, and the like. Once the object, or type of object, is recognized, a determination can be made 410 as to whether there is information available for the object.");
Both Poulos and Karakotsios teach content delivery management. Poulos teaches in the Abstract "generating and displaying people-triggered holographic reminders […] reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD." Karakotsios teaches in the Abstract "Upon recognizing the object, the user can be provided with information about the object, which in some cases can depend at least in part upon a current context or location of the object." Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Karakotsios would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Karakotsios to the teaching of Poulos would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as determining a second gaze direction of the user of the electronic device with respect to the second location of the user of the electronic device; and, based on the second location of the user of the electronic device and the second gaze direction of the user of the electronic device with respect to the second location of the user of the electronic device, determining whether the first position of the first object, which is included in the data structure, is visible to the user of the electronic device into similar systems. Further, as noted by Karakotsios, "a user might be able to indicate or train the device to provide certain information or inputs for certain types of objects. 
For example, a user could manually specify that the device should provide information about the user's upcoming appointments when the user gazes at a clock for a period of time. Alternatively, the user might always ask for schedule information after gazing at a clock, and the device can learn to provide that type of information in response to the user gazing at that type of object.” (Karakotsios, col. 11, lines 37-45).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NADJA CHONG, whose telephone number is (571) 270-3939. The examiner can normally be reached Monday-Friday, 8:00 am - 2:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RUTAO WU, can be reached at (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NADJA N CHONG CRUZ/
Primary Examiner, Art Unit 3623