DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/18/2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Singh, U.S. Patent Number 11,024,091 B2, in view of McHugh et al., U.S. Patent Number 11,164,380 B2, and further in view of Macbeth et al., U.S. Patent Publication Number 2007/0300185 A1.
Regarding claim 1, Singh discloses a computer-implemented method comprising: determining, from a perspective of a user, an activity for the user and a distance between the user and a location of the activity (col. 12, lines 36-40, AR application; the activity is recognized as walking and an associated activity zone is determined to be the path in front of him up to about 3 feet ahead), wherein concentric circles of increasing diameter centered about the location of the activity for the user represent increasing distances between the user and the location of the activity (col. 3, lines 53-57, determining a user's activity zone may comprise determining the user's direction of motion as a straight line and determining a zone as a preset view angle centered on the user's direction of motion; col. 12, lines 5-9, indicates a periphery of an activity zone; the activity zone may be determined to be a distorted (or non-symmetrical) shape (instead of a symmetrical, conical, or circular shape) based on a user's activity; col. 12, lines 25-26, the activity zone is not a contiguous region (or is a plurality of disconnected shapes)); ascertaining an information level for displaying information relevant to the activity in an augmented reality (AR) interface in accordance with the distance (col. 3, lines 33-37, determines whether the direction of travel is substantially towards the object and selects a location for display of the AR information); displaying information corresponding to the information level in the AR interface (col. 3, lines 34-40, selecting a first location at an offset from the object while the direction of travel is substantially towards the object, and selecting a second location that partially obscures the object while the direction of travel is not substantially towards the object); updating the information level as the distance between the user and the location of the activity changes and the user enters a new one of the circles to obtain a new information level (col. 12, lines 46-55, John runs into Mary while walking; his activity is determined to be chatting with Mary, and the associated activity zone changes); and displaying the information corresponding to the new information level in the AR interface, wherein the displaying of the information comprises overlaying the information on and blocking at least some detail of a real-world display in the AR interface (col. 12, lines 55-59, AR information for these objects is placed outside John's activity zone; if John turns to read this information, the AR information may change, but none of the AR information obstructs his view of Mary; col. 1, lines 23-26, may display annotations and/or animations overlaid on top of a real-world view and block or obscure a user's ability to see both the real world and the augmentations).
It is noted that Singh discloses determining, from a perspective of a user, an activity for the user and a distance between the user and a location of the activity (i.e., the activity is recognized as walking and an associated activity zone is determined to be the path in front of him up to about 3 feet ahead), and further discloses, at col. 12, lines 5-9, that the activity zone may be determined to be a distorted (or non-symmetrical) shape (instead of a symmetrical, conical, or circular shape), and, at col. 12, lines 25-26, that the activity zone is not a contiguous region (or is a plurality of disconnected shapes), but fails to specifically disclose determining contextual information; concentric circles of increasing diameter centered about the location of the activity; and contextual information granularity levels.
McHugh discloses determining an event for a user and a distance between the user and a location (col. 10, lines 1-2, event determining unit; col. 45-47, responsive based on a virtual or actual distance between the object and the AR HMD), wherein concentric circles of increasing diameter about the location of the event represent increasing distances between the user and the location of the event (col. 17, lines 51-53, a plurality of concentric zones of responsive content having different distances from a wearer of the AR HMD device); ascertaining an information level for displaying information relevant to the event in an augmented reality (AR) interface in accordance with the distance (col. 15, lines 46-48, can create different virtual content representations based on how close the user is to the associated real world object) by determining one of the concentric circles in which the user is located (col. 18, lines 6-8, Concentric Rings and Response Content Chart); displaying the information corresponding to the information level in the AR interface (figure 12A); updating the information level as the distance between the user and the location of the event changes and the user enters a new one of the concentric circles to obtain a new information granularity level (col. 18, lines 40-45, the user moves about a geographical area; as the distance between the AR HMD device and a real or virtual object changes, the UI associated with the real or virtual object may animate or transition to a different state); and displaying the information corresponding to the new information level in the AR interface (col. 18, lines 44-45, display different content to the user associated with the real or virtual object; may transition from providing a less detailed to a more detailed UI as the distance between the object and the AR HMD device decreases), wherein the displaying of the information comprises overlaying the information on and blocking at least some detail of a real-world display in the AR interface (figure 12B, blocking some detail of the wall in the real-world display).
It is further noted that Singh in view of McHugh fails to disclose the information displayed in the AR interface as contextual information with granularity.
Macbeth discloses determining contextual granularity in the computer-implemented method; ascertaining a contextual information granularity level for displaying contextual information relevant to the activity in an interface (paragraph 0009, dynamically changing the user interface of a system level shell based on a current (or future) activity of the user and other context data; the context data can include extended activity data, information about the user's state, and information about the environment); and displaying the contextual information corresponding to the contextual information granularity level in the interface (paragraph 0060, activity-centric adaptive user interfaces; granular applications/web-services functionality factoring around user activities).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the concentric circles for determining the level for displaying information, as disclosed by McHugh, in place of the circular shapes or plurality of disconnected shapes disclosed by Singh, to provide distance-responsive interfaces for activities in augmented reality (AR). It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the information displayed by Singh, which changes with the user's activity, as contextual information with granularity as disclosed by Macbeth, to align the information more closely with the intended user, as disclosed by Macbeth, by factoring around user activities.
Regarding claim 2, Singh discloses wherein the AR interface is supported by goggles that are wearable by the user (col. 5, lines 14-15, a headset or AR glasses).
Regarding claim 3, it is noted that Singh and McHugh fail to disclose wherein the ascertaining is executed by a machine-learning algorithm and is based on real-time updateable historic data of the user, the activity and surroundings.
Macbeth discloses wherein the ascertaining is executed by a machine-learning algorithm and is based on real-time updateable historic data of the user, the activity and surroundings (paragraph 0097, adaptive UI machine learning and reasoning; learn by monitoring the context, the decisions made and the user feedback; can produce (and/or update) a new set of learned rules).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to further include in the activity as disclosed by Singh, the machine-learning algorithm based on real-time updateable historic data of the user, the activity and surroundings, as further disclosed by Macbeth, to provide machine learning that can infer information on behalf of a user.
Regarding claim 4, Singh discloses wherein: at a lowest information level, the information includes no interactive options and no interactive options are displayed (col. 12, lines 40-42, John is able to view his walking path because AR information received from his AR applications is not displayed in his direction of travel); at a relatively low information level, the information comprises a broad description of information relevant to the event and interactive options relevant to the event for interaction with by the user comprising a set of basic interactive options comprising a lighting or a highlighting of the location of the event and alerts as to intervening objects to be avoided (col. 3, lines 64-67, highlighting a real-world object based on where a real-world object lies with respect to an activity zone may comprise displaying subtle highlighting); and at a relatively high information level, the information comprises the broad description of the information relevant to the activity, a narrow description of the activity and additional information about a relationship between the activity and externalities and interactive options relevant to the activity for interaction with by the user comprising a set of basic, intermediate and advanced interactive options (col. 12, lines 56-59, if John turns to read this information or to examine a real-world object in some detail, the AR information may change).
McHugh discloses wherein: at a lowest information level, the information includes no interactive options and no interactive options are displayed (figure 12A; 1202; pinned virtual content is displayed); at a relatively low information level, the information comprises a broad description of information relevant to the event and interactive options relevant to the event for interaction with by the user comprising a set of basic interactive options (figure 12B; 1206 and 1208) comprising a lighting or a highlighting of the location of the event and alerts as to intervening objects to be avoided (col. 24, lines 44-46, may display pinned virtual content with only the meeting room's name and a color indicating the present availability of the meeting room); and at a relatively high information level, the information comprises the broad description of the information relevant to the event (col. 24, lines 59-63, when the user gets even closer and the distance between the user and the conference room doorway is within the near zone, the virtual content can transition to provide the most detailed content), a narrow description of the event and additional information about a relationship between the event and externalities and interactive options relevant to the event for interaction with by the user comprising a set of basic, intermediate and advanced interactive options (col. 24, lines 47-67, additional detailed content would have been too small for the user to read or see when the user was farther away and in the far zone; can transition to provide the most detailed content available).
Macbeth discloses contextual information and granularity (paragraph 0009, dynamically changing the user interface of a system level shell based on a current (or future) activity of the user and other context data; the context data can include extended activity data, information about the user's state, and information about the environment; paragraph 0060, activity-centric adaptive user interfaces; granular applications/web-services functionality factoring around user activities).
Regarding claim 5, it is noted that Singh fails to disclose wherein the displaying of the information corresponding to the information level comprises displaying a first level of the information and displaying the first level of the interactive options, and the displaying of the information corresponding to the new information level comprises displaying a second level of the information, which differs from the first level, and displaying the second level of the interactive options, which differs from the first level.
McHugh discloses wherein: the displaying of the information corresponding to the information level comprises displaying a first level of the information and displaying the first level of the interactive options (figure 12B; col. 24, lines 47-57, once the user enters the medium zone the virtual content transitions to include more detailed virtual content; application transitions to show more detailed virtual content comprising a meeting room UI that provides a button to book the room for an amount of time since the room is available, a button to schedule a meeting time in the future, and an indication of the room’s schedule for the present day), and the displaying of the information corresponding to the new information level comprises displaying a second level of the information (figure 12C), which differs from the first level, and displaying the second level of the interactive options, which differs from the first level (col. 24, lines 59-67, when the user gets even closer and the distance between the user and conference room doorway is within the near zone, the virtual content can transition to provide the most detailed content about the meeting room; additional information that appears on the virtual meeting room UI can be a statement of availability and the name of the person responsible for the “Design Review” meeting scheduled).
Macbeth discloses contextual information and granularity (paragraph 0009, dynamically changing the user interface of a system level shell based on a current (or future) activity of the user and other context data; the context data can include extended activity data, information about the user's state, and information about the environment; paragraph 0060, activity-centric adaptive user interfaces; granular applications/web-services functionality factoring around user activities).
Regarding claim 6, Singh discloses displaying additional information of surroundings of the user and the location in the AR interface (col. 12, lines 28-34, associated AR information is displayed (or placed) near the object along with a marker at the periphery of the activity zone that indicates some AR information is available; additional AR information may be displayed if the user looks toward current AR information).
McHugh discloses wherein the computer-implemented method further comprises displaying additional information of surroundings of the user and the location in the AR interface (col. 19, lines 27-30, when the virtual object is perceived as close enough to the user to be useful, then the UI is displayed with more detail so the user can interact with the UI; Figure 12C).
Regarding claim 7, it is noted that Singh fails to disclose wherein each instance of the updating is discrete.
McHugh discloses wherein each instance of the updating is discrete (col. 23, lines 16-18, the current HMD device location in the spatial map is also stored and updated periodically or continuously as the user wearing the HMD moves around; current and updated distance between the HMD device and nearby objects in the spatial map may be calculated or determined periodically or on an ongoing basis).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide augmented information as disclosed by Singh, on a periodic or ongoing basis as disclosed by McHugh.
Regarding claims 8-14, they are rejected based upon similar rationale as above claims 1-7. Singh further discloses a non-transitory computer program product, the computer program product comprising one or more computer readable storage media having computer readable program code collectively stored on the one or more computer readable storage media, the computer readable program code being executed by a processor of a computer system to cause the computer system to perform a method (col. 15, lines 18-27).
Regarding claims 15-20, they are rejected based upon similar rationale as above claims 1, 2, and 5-7. Singh further discloses a computing system (102) comprising: a processor (118, processor); a memory coupled to the processor (130, non-removable memory; 132, removable memory); and one or more computer readable storage media coupled to the processor, the one or more computer readable storage media collectively containing instructions that are executed by the processor (col. 15, lines 51-53).
Response to Arguments
Applicant’s arguments, see pages 9-10, filed 11/03/2025, with respect to the rejection(s) of claim(s) 1-20 under 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Singh in view of McHugh, and further in view of Macbeth.
Applicant argues the previous combination fails to render obvious “from a perspective of a user” and concentric circles of increasing diameter that are “centered” about the location of the activity. The examiner responds that Singh discloses, at col. 3, lines 53-57, that determining a user’s activity zone may comprise determining the user’s direction of motion as a straight line and determining a zone as a preset view angle centered on the user’s direction of motion; at col. 12, lines 5-9, a periphery of an activity zone, where the activity zone may be determined to be a distorted (or non-symmetrical) shape (instead of a symmetrical, conical, or circular shape) based on a user’s activity; and at col. 12, lines 25-26, that the activity zone is not a contiguous region (or is a plurality of disconnected shapes); and that McHugh discloses concentric circles. Therefore, the combination would render obvious “from a perspective of a user” and concentric circles of increasing diameter that are “centered” about the location of the activity.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson whose telephone number is (571)272-7658. The examiner can normally be reached Monday - Friday 6am-2:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MOTILEWA GOOD-JOHNSON
Primary Examiner
Art Unit 2616
/MOTILEWA GOOD-JOHNSON/Primary Examiner, Art Unit 2619