Prosecution Insights
Last updated: April 19, 2026
Application No. 17/977,129

METHODS AND SYSTEMS FOR PROVIDING A PACE INDICATOR IN AN EXTENDED REALITY ENVIRONMENT

Final Rejection (§102, §103)
Filed
Oct 31, 2022
Examiner
KIYABU, KARIN A
Art Unit
2626
Tech Center
2600 — Communications
Assignee
Adeia Guides Inc.
OA Round
4 (Final)
Grant Probability
57% (Moderate)
OA Rounds
5-6
To Grant
3y 1m
With Interview
97%

Examiner Intelligence

Career Allow Rate
57% (213 granted / 373 resolved; -4.9% vs TC avg)
Interview Lift
+39.8% (resolved cases with interview vs. without)
Avg Prosecution
3y 1m typical timeline (18 currently pending)
Total Applications
391 across all art units (career history)

Statute-Specific Performance

§101: 2.9% (-37.1% vs TC avg)
§103: 66.5% (+26.5% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Deltas are relative to an estimated Tech Center average • Based on career data from 373 resolved cases
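For readers who want to sanity-check the headline figures, the short Python sketch below recomputes the career allowance rate from the granted/resolved counts and backs out the implied Tech Center averages from each statute's rate and its "vs TC avg" delta. This is only an illustration of how the numbers relate; the variable names and the assumption that each delta equals the examiner's rate minus the Tech Center average are editorial, not part of the underlying data.

    # Illustrative check of the dashboard figures; names and the delta convention are assumed.
    granted, resolved = 213, 373
    print(f"Career allow rate: {granted / resolved:.1%}")  # ~57.1%, matching the 57% card

    # Per-statute rejection rates and their "vs TC avg" deltas from the chart above.
    statute_stats = {"101": (0.029, -0.371), "103": (0.665, 0.265),
                     "102": (0.145, -0.255), "112": (0.120, -0.280)}
    for statute, (rate, delta) in statute_stats.items():
        tc_average = rate - delta  # assumed convention: delta = examiner rate - TC average
        print(f"§{statute}: examiner {rate:.1%}, implied TC average {tc_average:.1%}")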

Office Action

§102 §103
DETAILED ACTION The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . This is in reply to an Amendment filed on December 31, 2025, regarding Application No. 17/977,129. Applicants amended claims 1 and 11, added new claims 41-42, and previously canceled claims 7, 17, and 21-40. Claims 1-6, 8-16, 18-20, and 41-42 are pending. Response to Arguments Applicants’ amendment to claim 11 and remark regarding claim objections (Remarks, p. 7) are acknowledged. In view of the amendment, the objections are moot. Applicants’ arguments filed on December 31, 2025 have been fully considered but they are not persuasive. In response to the arguments regarding newly amended independent claims 1 and 11, Viner, “a second plurality of locations of a second route”, “‘Second Plurality of Locations’ for a Geographically Separated User”, and “‘providing, using the control circuitry, a pace indicator to the first user moving along the first route based on the first route data and the second plurality of locations of the second route data...’ as recited in claim 1, as amended”, rejections, and other cited references and cure (Remarks, pp. 8 and 10), the Office respectfully disagrees and submits that the arguments are not commensurate with the rejections and the recited features are taught by Viner. More specifically, Viner discloses: [0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session, in accordance with particular embodiments. The scenario 100 shows a user 135 jogging along a route 120. Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110…. The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends…. At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded. … [0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session…. The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one…. If the routes are different, particular embodiments may determine the virtual companion’s 255 relative position based on distance measurements. The distance measurements may be… derived from recordings of GPS coordinates. 
To derive distance measurements from recorded GPS coordinates, the AR application may translate the coordinates into distance measurements by computing the distance traveled between successive GPS locations. (Emphasis added). Also, figures 1-3 and 6 and paragraphs [0006], [0017], [0019], [0022]-[0023], [0025], [0048]-[0050], [0073], and [0075] of Viner teach: providing, using the control circuitry of 600, a pace indicator 255 to the first user 235 moving along the first route 220 based on the first route 220 data and the second plurality of locations N of the second route 120 data, wherein the pace indicator 255 comprises an avatar moving along the first route 220 in an extended reality environment, the avatar representing the second user 135 moving along the second route 120. Because all features of newly amended independent claims 1 and 11 are taught by Viner, as discussed above and in the rejections, there are no deficiencies, as argued, for which other cited references are required to cure and the claims are not allowable. In response to the arguments regarding “Positioning Mechanism in Viner” and operable for “Geographically Separated Locations”, “the rendering of the avatar”, technically compatible “with, e.g., a geographically separated concept, such as the limitation of claim 1”, avatar and “’moving along the first route’ as claimed”, “rendered invisible or thousands of miles off-screen”, and “’providing an avatar along the first route based on the "second plurality of locations’ of a geographically separated route” (Remarks, p. 9), the Office respectfully disagrees and submits that the argument is not commensurate with the rejections and the relevant claimed features are taught by Viner. For example, Viner discloses: [0006] Embodiments described herein relate to an AR feature where a virtual… pacing companion is presented to a user while he is engaged in an activity (e.g., jogging...) in order to provide the user with visual progress comparison in real-time. In particular embodiments, the user's locations may be tracked and used for positioning a virtual reference object (e.g., an avatar) for display on a user's AR device while the user is engaged in the activity. That virtual reference object, such as a virtual avatar…, may be presented to the user as an AR effect that is integrated with real-world scenes. Based on the relative position of that virtual reference object (e.g., the virtual reference object may appear ahead of or behind the user), the user may gauge how well he is currently performing. … [0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session, in accordance with particular embodiments. The scenario 100 shows a user 135 jogging along a route 120. Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110…. The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. 
The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends…. At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded. … [0022] The data recorded in the first movement session (e.g., as described with reference to FIG. 1) may be used to generate a virtual companion in a second movement session…. In particular embodiments, the user in the second movement session may be different from the user in the first movement session. For example, after user A recorded his movement data in the first movement session, the recorded data may be used by user B in a second movement session so that user B can see how he does compared to user A. In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity (e.g., two joggers may race against each other in substantially real-time). For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience. [0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session…. The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one…. If the routes are different, particular embodiments may determine the virtual companion’s 255 relative position based on distance measurements. The distance measurements may be… derived from recordings of GPS coordinates. To derive distance measurements from recorded GPS coordinates, the AR application may translate the coordinates into distance measurements by computing the distance traveled between successive GPS locations. … [0025] While the user 235 is jogging in the second movement session, the AR application may generate a virtual companion 255 in the field of view of the user 235 to visually show how his 235 current performance compares with the performance recorded in the first movement session. For example, in the second movement session, the user 235 may start tracking his current movement by invoking the AR application at time t0′. At the user's 235 current location 230, the AR application may compute where the virtual companion 255 should appear relative to the user 235 at that instant. 
For example, the AR application can determine that the user’s 235 current running duration in the second movement session is d′=ti−t0′ (in certain embodiments, timing information may alternatively be tracked as session duration). Based on this duration d′, the AR application may query the recorded first movement session to determine where the user 135 was in the first movement session after jogging for d′ time. For example, from the recorded first movement session, the AR application may determine that d′=d, where d=ti−t0 is the corresponding relative time with respect to the start time t0 in the first movement session. The location (xi, yi) 130 recorded at time ti in the first movement session can be retrieved from the tracking record and used to position the virtual companion 255. For example, the companion 255 representing the user 135 at the location 130 in the first movement session can be conceptually placed relative to the user 235 in the second movement session based on a relative distance between the retrieved past location 130 and the current location 230 of the user 235.” (Emphasis added). Viner also discloses: [0008] In general, embodiments described herein relate to an AR feature that provides real-time visual comparison of a user's current activity against past recorded activities of the user himself or past or current activities of others. An AR application may track and record a first movement session of a user (or another person, such as a friend of the user). The tracking record may comprise a plurality of locations with respect to a first start time of the first movement session, and each location is associated with a time at which the location may be recorded. Then, the AR application can be invoked again to track the user's current location in a second movement session with respect to a second start time. Based on the user's current location, the AR application may determine how the virtual companion should appear in the field of view of the user. The AR application, using the tracking record, may first determine a past location (e.g., GPS coordinates) or travel distance (e.g., half a mile) of the user at a corresponding moment in the first movement session based on the current time in the second session. For example, if the user has jogged for 3 minutes in the current jogging session since the start of the session, the computing system may retrieve the user's past location or traveled distance when he was 3 minutes into his previous jog. The past location or travel distance of the user in the prior session may be used to determine a relative position between the user's current location and the virtual companion, which in turn may be used to determine where the virtual companion should be in the real-world scene. The depth (i.e., distance from the current user) and orientation of the virtual companion can be determined based on the relative position, and the position of the virtual companion on the user's AR display can be determined by the AR application. The appearance of the virtual companion may be adjusted accordingly to be realistically displayed on the user's AR display (e.g. AR glasses or smartphone screen) based on the determined virtual companion's position. … [0027] The location 250 of the virtual companion 255 relative to the user 235 may be used to simulate depth-perception cues for the virtual companion 255. The AR application may geometrically adjust the appearance of the virtual companion 255 as it appears on the user's two-dimensional display. 
For example, when the location 250 of the virtual companion 255 is far ahead of the current location 230 of the user 235, the depth of the virtual companion 255 in the field of view of the user 235 is deeper. The size of the virtual companion 255 may be scaled with respect to the depth, such that the virtual companion 255 may appear smaller in the field of view of the user 235. For example, if the virtual companion 255 is to be 100 feet away from the user 235, the AR application may geometrically scale the size of the virtual companion 255 to appear smaller (e.g., a 5-foot-tall virtual companion 255 may be scaled to be 1 inch-tall) in the field of view of the user 235. As another example, when the relative distance between the location 250 of the virtual companion 255 and the location 230 of the user 235 is approximately the same, the depth of the virtual companion 255 may equal to zero or almost zero. As such, the size of the virtual companion 255 may not need to be scaled smaller when it is displayed on the user's AR device. The virtual companion's dimension may be scaled geometrically based on the depth thereof or based on other mathematically models (e.g., 3D model), to create a realistic visual effect of the virtual companion 255. (Emphasis added). As the method of Viner provides a virtual pacing companion avatar to a first user while the first user is engaged in an activity such as a virtual race with a second user jogging on a geographically separated path, the avatar would not be “positioned the literal geographic distance away in the virtual environment” nor “rendered invisible or thousands of miles off screen”, as argued. Thus, as discussed above and in the rejections, Viner teaches the relevant claimed features and the method of Viner is not technically incompatible with a geographically separated concept. Moreover, while the arguments appear suggestive of arguments corresponding to MPEP 2143.01 (V) (The proposed modification cannot render the prior art unsatisfactory for its intended purpose), the Office respectfully notes that the section requires that “[i]f a proposed modification would render the prior art invention being modified unsatisfactory for its intended purpose, there may be no suggestion or motivation to make the proposed modification.” MPEP 2143.01 (V). Here, there is no proposed modification of the prior art. In response to the arguments regarding anticipation, “combination of distinct embodiments”, “different users/routes”, “relies on distance (scalar) data”, and location data, “location data” and “same route to function (comparing x, y coordinates), different route, paragraph [0025], “positioning logic”, different route scenario, and same route scenario, “‘distance-based’ separate route embodiment with the ‘location-based’ same route embodiment”, “the claimed invention”, obviousness argument, anticipation argument, and MPEP § 2131.01 (Remarks, pp. 8 and 9-10), the Office respectfully disagrees and submits that claims 1 and 11 are anticipated by Viner. More specifically, while Viner references “embodiments” and “example scenario(s)”, they are described together as options or alternatives. For example, Viner discloses: [0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session, in accordance with particular embodiments. The scenario 100 shows a user 135 jogging along a route 120. 
Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110. For example, at the beginning of the user's run, he may have launched the AR application on his mobile device and indicated that a movement session is to be recorded (e.g., by pressing a button, giving a voice command, or making a gesture to indicate a desire to start the recording). The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends. The movement session may be terminated by the user 135 (e.g., by pressing a button, giving a voice command, or making a gesture to indicate a desire to end the session) or it may terminate automatically (e.g., upon satisfying one or more predetermined criteria, such as after reaching a predetermined jogging duration, distance, or destination). At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded. … [0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session. The scenario 200 illustrates where a virtual companion 255, generated based on the movement data recorded in the first movement session (e.g., as described with reference to FIG. 1), would conceptually be located relative to a user 235 (who could be the same user 135 who recorded the first movement session or a different user). The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one. If the routes are the same, particular embodiments may determine the virtual companion's 255 relative position to the user 235 based on GPS coordinates or distance measurements. If the routes are different, particular embodiments may determine the virtual companion’s 255 relative position based on distance measurements. The distance measurements may be what was recorded during the first movement session or derived from recordings of GPS coordinates. To derive distance measurements from recorded GPS coordinates, the AR application may translate the coordinates into distance measurements by computing the distance traveled between successive GPS locations. Alternatively, for the sake of argument, if, as argued, different embodiments are combined, the Office respectfully submits that claims 1 and 11 are anticipated by Viner. 
See MPEP 2131.02 (III) (“A reference disclosure can anticipate a claim when the reference describes the limitations but ‘d[oes] not expressly spell out’ the limitations as arranged or combined as in the claim, if a person of skill in the art, reading the reference, would ‘at once envisage’ the claimed arrangement or combination.” Kennametal, Inc. v. Ingersoll Cutting Tool Co., 780 F.3d 1376, 1381, 114 USPQ2d 1250, 1254 (Fed. Cir. 2015) (quoting In re Petering, 301 F.2d 676, 681(CCPA 1962))…. See also Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co., 851 F.3d 1270, 1274, 122 USPQ2d 1116, 1120 (Fed. Cir. 2017) (“… Kennametal addresses whether the disclosure of a limited number of combination possibilities discloses one of the possible combinations.”).”). Here, a person of ordinary skill in the art would “‘at once envisage’ the claimed arrangement or combination” required for anticipation where the options or alternatives are disclosed together with other described options or alternatives. For the reasons discussed above and in the rejections, the pending claims are not allowable. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1-2, 4, 8-12, 14, 18-20, and 41-42 are rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as being anticipated by Viner in US 2020/0151961 A1 (hereinafter Viner). Regarding claim 1, Viner teaches: A method comprising (Viner: FIG. 3 and “[0047] FIG. 3 illustrates an example method for creating a virtual object (e.g., representing a user…) by an AR application in particular embodiments….”): determining, using control circuitry (of 600 in FIG. 6), first route data (220 data in FIG. 2) of a first route (220 in FIG. 2), wherein the first route data comprises a pace of a first user (235 in FIG. 2) moving through a first plurality of locations along the first route (Viner: FIGs. 2-3 and 6, “[0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session…. The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one….”, “[0024] In the example shown, the user's 235 starting location 210 with respect to time in the second movement session can be represented by (x0′, y0′, t0′), wherein t0′ is the time at which the starting location 110 (x0′, y0′) is recorded. The current location 230 of the user 235 in the second movement session with respect to time can be represented by (xi′, yi′, ti′), where ti′ is the time at which the current location 230 (xi′, yi′) is recorded. In the scenario 200 shown, the second movement session will end at location 240 (xM′, yM′) and the corresponding ending time is represented by tM′. 
At the current location 230, the user's 235 jogging duration can be computed as d′=ti′−t0′, where ti′ and t0′ are both timestamps.”, “[0025] While the user 235 is jogging in the second movement session, the AR application may generate a virtual companion 255 in the field of view of the user 235 to visually show how his 235 current performance compares with the performance recorded in the first movement session. For example, in the second movement session, the user 235 may start tracking his current movement by invoking the AR application at time t0′. At the user's 235 current location 230, the AR application may compute where the virtual companion 255 should appear relative to the user 235 at that instant. For example, the AR application can determine that the user's 235 current running duration in the second movement session is d′=ti−t0′ (in certain embodiments, timing information may alternatively be tracked as session duration). Based on this duration d′, the AR application may query the recorded first movement session to determine where the user 135 was in the first movement session after jogging for d′ time. For example, from the recorded first movement session, the AR application may determine that d′=d, where d=ti−t0 is the corresponding relative time with respect to the start time t0 in the first movement session. The location (xi, yi) 130 recorded at time ti in the first movement session can be retrieved from the tracking record and used to position the virtual companion 255. For example, the companion 255 representing the user 135 at the location 130 in the first movement session can be conceptually placed relative to the user 235 in the second movement session based on a relative distance between the retrieved past location 130 and the current location 230 of the user 235.”, “[0073] FIG. 6 illustrates an example computer system 600. In particular embodiments, one or more computer systems 600 perform one or more steps of one or more methods described or illustrated herein….”, and “[0075] In particular embodiments, computer system 600 includes a processor 602, memory 604, storage 606, an input/output (I/O) interface 608, a communication interface 610, and a bus 612. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.”, see also “[0006]… [T]he user's locations may be tracked and used for positioning a virtual reference object (e.g., an avatar) for display on a user's AR device while the user is engaged in the activity….”, “[0019]… N number of locations and their associated times may be recorded and associated with that movement session [for a user 135 in FIG. 1]. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded.”, [0022] (virtual race between two users), [0028] and [0034]-[0035] (route condition data), and [0047]); determining, using the control circuitry, second route data (120 data in FIG. 1) of a second route (120 in FIG. 1), wherein the second route data comprises a pace of a second user (135 in FIG.
1) moving through a second plurality of locations (N locations) along the second route and the second plurality of locations is geographically separated from the first plurality of locations (Viner: FIGs. 1-3 and 6, “[0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session…. The scenario 100 shows a user 135 jogging along a route 120. Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110…. The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends…. At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded.”, “[0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session…. The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one….”, “[0073] FIG. 6 illustrates an example computer system 600. In particular embodiments, one or more computer systems 600 perform one or more steps of one or more methods described or illustrated herein….”, and “[0075] In particular embodiments, computer system 600 includes a processor 602, memory 604, storage 606, an input/output (I/O) interface 608, a communication interface 610, and a bus 612. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.”, see also [0022] (virtual race between two users), [0028] and [0034]-[0035] (route condition data), and [0047]); and providing, using the control circuitry, a pace indicator (255 in FIG. 2) to the first user moving along the first route based on the first route data and the second plurality of locations of the second route data, wherein the pace indicator comprises an avatar moving along the first route in an extended reality environment, the avatar representing the second user moving along the second route (Viner: FIGs. 1-3 and 6, “[0006] Embodiments described herein relate to an AR feature where a virtual… pacing companion is presented to a user while he is engaged in an activity (e.g., jogging...) in order to provide the user with visual progress comparison in real-time. 
In particular embodiments, the user's locations may be tracked and used for positioning a virtual reference object (e.g., an avatar) for display on a user's AR device while the user is engaged in the activity. That virtual reference object, such as a virtual avatar…, may be presented to the user as an AR effect that is integrated with real-world scenes. Based on the relative position of that virtual reference object (e.g., the virtual reference object may appear ahead of or behind the user), the user may gauge how well he is currently performing.”, “[0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session, in accordance with particular embodiments. The scenario 100 shows a user 135 jogging along a route 120. Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110…. The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends…. At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded.”, “[0022] The data recorded in the first movement session (e.g., as described with reference to FIG. 1) may be used to generate a virtual companion in a second movement session…. In particular embodiments, the user in the second movement session may be different from the user in the first movement session. For example, after user A recorded his movement data in the first movement session, the recorded data may be used by user B in a second movement session so that user B can see how he does compared to user A. In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity (e.g., two joggers may race against each other in substantially real-time). For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. 
Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, “[0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session…. The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one…. If the routes are different, particular embodiments may determine the virtual companion’s 255 relative position based on distance measurements. The distance measurements may be… derived from recordings of GPS coordinates. To derive distance measurements from recorded GPS coordinates, the AR application may translate the coordinates into distance measurements by computing the distance traveled between successive GPS locations.”, “[0025] While the user 235 is jogging in the second movement session, the AR application may generate a virtual companion 255 in the field of view of the user 235 to visually show how his 235 current performance compares with the performance recorded in the first movement session. For example, in the second movement session, the user 235 may start tracking his current movement by invoking the AR application at time t0′. At the user's 235 current location 230, the AR application may compute where the virtual companion 255 should appear relative to the user 235 at that instant. For example, the AR application can determine that the user’s 235 current running duration in the second movement session is d′=ti−t0′ (in certain embodiments, timing information may alternatively be tracked as session duration). Based on this duration d′, the AR application may query the recorded first movement session to determine where the user 135 was in the first movement session after jogging for d′ time. For example, from the recorded first movement session, the AR application may determine that d′=d, where d=ti−t0 is the corresponding relative time with respect to the start time t0 in the first movement session. The location (xi, yi) 130 recorded at time ti in the first movement session can be retrieved from the tracking record and used to position the virtual companion 255. For example, the companion 255 representing the user 135 at the location 130 in the first movement session can be conceptually placed relative to the user 235 in the second movement session based on a relative distance between the retrieved past location 130 and the current location 230 of the user 235.”, “[0073] FIG. 6 illustrates an example computer system 600. In particular embodiments, one or more computer systems 600 perform one or more steps of one or more methods described or illustrated herein….”, and “[0075] In particular embodiments, computer system 600 includes a processor 602, memory 604, storage 606, an input/output (I/O) interface 608, a communication interface 610, and a bus 612. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.”, see also “[0017] In particular embodiments, a computing system may be configured to create a virtual “companion” (e.g., for performance comparison or pacing purposes) and integrate the virtual companion as an AR effect into a user's view of a real-world scenes. 
Based on the relative position of that virtual companion and the user's current location, the user be presented with a visual comparison of his current movement (e.g., jogging… progress) against his own past movement or another user's past or concurrent movement at a corresponding moment in time with respect to a start time. For example, with a pair of AR glasses or a smartphone screen, the user may be able to see the virtual companion running ahead or behind him in the field of view of the user.” and [0048]-[0050]). Regarding claim 2, Viner teaches: The method according to claim 1, the method comprising: determining, concurrently, the pace of the first user moving through the first plurality of locations along the first route and the pace of the second user moving through the second plurality of locations along the second route (Viner: “[0022] The data recorded in the first movement session (e.g., as described with reference to FIG. 1) may be used to generate a virtual companion in a second movement session…. In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity…. For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, see also [0017]; claim 1 above); and providing the pace indicator as the first user moves through the first plurality of locations along the first route and the second user moves through the second plurality of locations along the second route (Viner: FIGs. 2-3 and “[0022] The data recorded in the first movement session (e.g., as described with reference to FIG. 1) may be used to generate a virtual companion in a second movement session…. In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity…. For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, see also [0017]; claim 1 above).
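To make the mechanism the rejection repeatedly cites more concrete: for routes that are not the same, Viner's paragraphs [0022]-[0023] compare runners by scalar distance traveled, derived by summing the distances between successive recorded GPS coordinates. The Python sketch below is a minimal editorial illustration of that reading, with made-up coordinates and helper names; it is not code from Viner or from the application.

    import math

    def haversine_m(p1, p2):
        # Great-circle distance in meters between two (lat, lon) points given in degrees.
        lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * math.asin(math.sqrt(a))

    def distance_traveled_m(fixes):
        # Sum the distances between successive GPS fixes recorded during a movement session.
        return sum(haversine_m(a, b) for a, b in zip(fixes, fixes[1:]))

    # Two users on geographically separated routes (coordinates are illustrative only).
    user_a = [(40.7128, -74.0060), (40.7136, -74.0052), (40.7145, -74.0045)]
    user_b = [(51.5074, -0.1278), (51.5080, -0.1269), (51.5088, -0.1262)]
    lead_m = distance_traveled_m(user_a) - distance_traveled_m(user_b)
    print(f"User A is {lead_m:+.0f} m ahead of user B by route distance")

On this reading, a positive difference places the companion avatar ahead of the viewing user along that user's own route and a negative one behind, consistent with Viner [0018], rather than at the literal geographic offset between the two routes.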
Regarding claim 4, Viner teaches: The method according to claim 1, wherein the first route data comprises a condition of the first route, and the second route data comprises a condition of the second route, wherein the condition of each route comprises a physical and/or environmental condition (Viner: “[0022]… In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity…. For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, “[0028]… By processing images of the user's surrounding, the AR application may identify objects in the user's view, detect the position, topology, and depth (e.g., distance) of the objects, and place or track the virtual companion in the scene accordingly. In particular embodiments, the AR application may determine the 3-dimensional layout of the surrounding using various techniques…. The depth information of objects in the real-world may be used to more accurately position the virtual companion 255. For example, if the relative distance between the virtual companion 255 and the user 235 is 10 feet, the AR application may position the virtual companion 255 based on other objects that are known to be approximately 10 feet from the user 235 (e.g., a ground segment or a tree may be 10 feet away). The AR application may also interpolate known distance information to approximate the desired ground position. For example, if a tree that is 11 feet away and a street sign that is 9 feet away, the ground between would be approximately 10 feet away.”, “[0034] As another example of providing the appropriate visual cues, the virtual companion's position within the field of view of the user may be placed according to where the road appears. For instance, if the virtual companion's position in the field of view of the user is approximately centered when the user is running on leveled ground, the virtual companion may appear higher when the user runs on an upward-sloping road (or appear lower when running downhill), even though the relative distance between the user's current location in the second movement session and the past location of the user in the first movement session may be the same in both scenarios.”, and “[0035] In particular embodiments, object-recognition and depth-sensing technology may be used to identify objects in the real-world scene (e.g., roads, buildings, etc.) so that the virtual companion may be integrated with the scene more realistically. For example, if the virtual companion is to represent a jogger, the system may try to place the virtual companion on a real-world road segment that is within the user's field of view. Based on the relative distance between the virtual companion and the user, the system may identify… a road segment that is at the desired distance from the user. 
Based on this determination, the system may have the virtual companion appear on top of that road segment. In particular embodiments, object-recognition and depth-sensing technology may be used to generate a contour map of the terrain within the user's field of view. For example, object-recognition technology may be used to identify roads, and depth-sensing technology may be used to determine how far points along the road or road segments are from the user. Such information may be used to generate a contour map. A contour mapping of the real scene in the user's view can enable the AR application to correctly place the virtual object in the user's view based on a determination of the appropriate ground segment for the virtual companion to “touch” on. For example, based on the contour map, the computing system may be able to identify a road segment in the user's 2D field of view that is of the desired distance away from the user for placing the virtual companion. By knowing the size of the virtual companion and the position of the virtual companion in the field of the user's view in the contour map, the tracking system may place the virtual companion in the field of the user's view so that the virtual companion may be displayed realistically on the user's AR display.”). Regarding claim 8, Viner teaches: The method according to claim 1, the method comprising: determining a position of the avatar in the extended reality environment relative to the first user based on the second route data (Viner: “[0025]… [I]n the second movement session, the user 235 may start tracking his current movement by invoking the AR application at time t0′. At the user's 235 current location 230, the AR application may compute where the virtual companion 255 should appear relative to the user 235 at that instant. For example, the AR application can determine that the user's 235 current running duration in the second movement session is d′=ti−t0′ (in certain embodiments, timing information may alternatively be tracked as session duration). Based on this duration d′, the AR application may query the recorded first movement session to determine where the user 135 was in the first movement session after jogging for d′ time. For example, from the recorded first movement session, the AR application may determine that d′=d, where d=ti−t0 is the corresponding relative time with respect to the start time t0 in the first movement session. The location (xi,yi) 130 recorded at time ti in the first movement session can be retrieved from the tracking record and used to position the virtual companion 255. For example, the companion 255 representing the user 135 at the location 130 in the first movement session can be conceptually placed relative to the user 235 in the second movement session based on a relative distance between the retrieved past location 130 and the current location 230 of the user 235.”, see also [0017], “[0018]… [W]hen the user is jogging slower in the current session than he was in a recorded prior session, the virtual companion may be seen jogging ahead of the user on the user's AR glasses. As another example, when the user is faster in the current session than he was in the recorded prior session, the virtual companion may be seen jogging behind the user when the user turns around.”, and [0023]). 
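The claim 8 mapping turns on the corresponding-time lookup described in Viner [0008] and [0025]: take the elapsed time in the current session, look up how far the recorded session had progressed at the same elapsed time, and offset the avatar from the user by the difference. The sketch below is an editorial paraphrase of that logic under an assumed record format of (elapsed_seconds, distance_m) samples; it is illustrative only and not taken from either document.

    from bisect import bisect_right

    def progress_at(track, elapsed_s):
        # Distance traveled at a given elapsed time, using the last recorded sample at or
        # before that time; track is a time-sorted list of (elapsed_s, distance_m) tuples.
        times = [t for t, _ in track]
        i = max(bisect_right(times, elapsed_s) - 1, 0)
        return track[i][1]

    def companion_offset_m(recorded_track, current_elapsed_s, current_distance_m):
        # Positive: the recorded companion is ahead of the user; negative: behind (cf. [0018]).
        return progress_at(recorded_track, current_elapsed_s) - current_distance_m

    # Illustrative recorded first session at a steady 5 m/s pace.
    recorded = [(0, 0.0), (60, 300.0), (120, 600.0), (180, 900.0)]
    offset = companion_offset_m(recorded, current_elapsed_s=150, current_distance_m=700.0)
    print(f"Place the avatar {offset:+.0f} m from the user along the current route")

A real implementation would likely interpolate between recorded samples rather than stepping to the last one; that refinement is omitted here for brevity.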
Regarding claim 9, Viner teaches: The method according to claim 8, wherein a scaling function is applied to the second route data (Viner: “[0027]… The AR application may geometrically adjust the appearance of the virtual companion 255 as it appears on the user's two-dimensional display. For example, when the location 250 of the virtual companion 255 is far ahead of the current location 230 of the user 235, the depth of the virtual companion 255 in the field of view of the user 235 is deeper. The size of the virtual companion 255 may be scaled with respect to the depth, such that the virtual companion 255 may appear smaller in the field of view of the user 235…. [W]hen the relative distance between the location 250 of the virtual companion 255 and the location 230 of the user 235 is approximately the same, the depth of the virtual companion 255 may equal to zero or almost zero. As such, the size of the virtual companion 255 may not need to be scaled smaller when it is displayed on the user's AR device….”, see also “[0039]… [T]he scale of the virtual companion may also depend on the relative distance between the virtual companion and the user, so that the virtual companion appears smaller when it is farther and larger when it is closer to the user.”). Regarding claim 10, Viner teaches: The method according to claim 1, wherein the pace indicator is provided in real time or near real time (Viner: “[0047] FIG. 3 illustrates an example method for creating a virtual object (e.g., representing a user…) by an AR application in particular embodiments. The method 300 may begin at step 310, where a computing system (e.g., mobile phone or AR glasses) running an AR application may access (e.g.,… remote access, such as via a social networking system) a tracking record of a first user during a first movement session. The tracking record may comprise a plurality of locations of the first user and associated time measurements during the first movement session. For example, the tracking record may comprise a plurality of locations, corresponding to tracked movements of the first user during the first movement session, and each of the plurality of locations may be associated with a time at which the location is recorded. The plurality of the locations may be tracked by GPS and/or elevation sensors, traced as a route of the first user, and saved in the tracking record. The recorded locations may, additionally or alternatively, be represented by the distance traveled since the start of the movement session, and the associated time may be represented by the duration since the start of the movement session. The tracking records… can be real-time location data accessible via internet or shared on the social networking system. The tracking records… can be real-time location data of other users. The first tracking record may have a starting location and an ending location corresponds to a start time and ending time, respectively.”, see also [0017] and [0022]). Regarding claim 11, this claim is rejected under similar rationale as claim 1 above. 
However, it is noted that claim 11 differs from claim 1 above in that the following are recited: A system comprising control circuitry configured to: wherein the second route data comprises the pace of the second user moving through the second plurality of locations along the second route and the second route is geographically separated from the first plurality of locations; and provide the pace indicator to the first user moving through the first plurality of locations along the first route based on the first route data and the second route data, wherein the pace indicator comprises the avatar moving along the first route in the extended reality environment, the avatar representing the second user moving through the second plurality of locations along the second route. Viner teaches: A system (600 in FIG. 6) comprising control circuitry configured to (Viner: FIG. 6, “[0073] FIG. 6 illustrates an example computer system 600. In particular embodiments, one or more computer systems 600 perform one or more steps of one or more methods described or illustrated herein….”, and “[0075] In particular embodiments, computer system 600 includes a processor 602, memory 604, storage 606, an input/output (I/O) interface 608, a communication interface 610, and a bus 612. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.”; claim 1 above): wherein the second route data comprises a pace of a second user moving through a second plurality of locations along the second route and the second route is geographically separated from the first plurality of locations (Viner: FIGs. 1-3 and 6, “[0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session…. The scenario 100 shows a user 135 jogging along a route 120. Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110…. The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends…. At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded.”, and “[0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session…. 
The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one….”; see also claim 1 above); and provide a pace indicator to the first user moving through the first plurality of locations along the first route based on the first route data and the second route data, wherein the pace indicator comprises an avatar moving along the first route in an extended reality environment, the avatar representing the second user moving through the second plurality of locations along the second route (Viner: FIGs. 1-3 and 6, “[0006] Embodiments described herein relate to an AR feature where a virtual… pacing companion is presented to a user while he is engaged in an activity (e.g., jogging...) in order to provide the user with visual progress comparison in real-time. In particular embodiments, the user's locations may be tracked and used for positioning a virtual reference object (e.g., an avatar) for display on a user's AR device while the user is engaged in the activity. That virtual reference object, such as a virtual avatar…, may be presented to the user as an AR effect that is integrated with real-world scenes. Based on the relative position of that virtual reference object (e.g., the virtual reference object may appear ahead of or behind the user), the user may gauge how well he is currently performing.”, “[0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session, in accordance with particular embodiments. The scenario 100 shows a user 135 jogging along a route 120. Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110…. The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends…. At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded.”, “[0022] The data recorded in the first movement session (e.g., as described with reference to FIG. 1) may be used to generate a virtual companion in a second movement session…. In particular embodiments, the user in the second movement session may be different from the user in the first movement session. For example, after user A recorded his movement data in the first movement session, the recorded data may be used by user B in a second movement session so that user B can see how he does compared to user A. 
In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity (e.g., two joggers may race against each other in substantially real-time). For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, “[0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session…. The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one….”, and “[0025] While the user 235 is jogging in the second movement session, the AR application may generate a virtual companion 255 in the field of view of the user 235 to visually show how his 235 current performance compares with the performance recorded in the first movement session. For example, in the second movement session, the user 235 may start tracking his current movement by invoking the AR application at time t0′. At the user's 235 current location 230, the AR application may compute where the virtual companion 255 should appear relative to the user 235 at that instant. For example, the AR application can determine that the user's 235 current running duration in the second movement session is d′=ti−t0′ (in certain embodiments, timing information may alternatively be tracked as session duration). Based on this duration d′, the AR application may query the recorded first movement session to determine where the user 135 was in the first movement session after jogging for d′ time. For example, from the recorded first movement session, the AR application may determine that d′=d, where d=ti−t0 is the corresponding relative time with respect to the start time t0 in the first movement session. The location (xi, yi) 130 recorded at time ti in the first movement session can be retrieved from the tracking record and used to position the virtual companion 255. For example, the companion 255 representing the user 135 at the location 130 in the first movement session can be conceptually placed relative to the user 235 in the second movement session based on a relative distance between the retrieved past location 130 and the current location 230 of the user 235.”, see also “[0017] In particular embodiments, a computing system may be configured to create a virtual “companion” (e.g., for performance comparison or pacing purposes) and integrate the virtual companion as an AR effect into a user's view of a real-world scenes. Based on the relative position of that virtual companion and the user's current location, the user be presented with a visual comparison of his current movement (e.g., jogging… progress) against his own past movement or another user's past or concurrent movement at a corresponding moment in time with respect to a start time. 
For example, with a pair of AR glasses or a smartphone screen, the user may be able to see the virtual companion running ahead or behind him in the field of view of the user.” and [0048]-[0050]; see also claim 1 above). Regarding claim 12, this claim is rejected under similar rationale as claim 2 above. Regarding claim 14, this claim is rejected under similar rationale as claim 4 above. Regarding claim 18, this claim is rejected under similar rationale as claim 8 above. Regarding claim 19, this claim is rejected under similar rationale as claim 9 above. Regarding claim 20, this claim is rejected under similar rationale as claim 10 above. Regarding claim 41, Viner teaches: The method of claim 1, further comprising: determining a current location of the first user along the first route and a current location of the second user along the second route, wherein the first route data and the second route data are determined based on the respective current locations (Viner: “[0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session…. The scenario 100 shows a user 135 jogging along a route 120. Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110…. The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends…. At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded.”, “[0022]… [T]he movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity (e.g., two joggers may race against each other in substantially real-time). For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, and “[0023] FIG. 
2 illustrates a snapshot of an example scenario 200 during a second movement session…. The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one…. If the routes are different, particular embodiments may determine the virtual companion's 255 relative position based on distance measurements. The distance measurements may be what was recorded during the first movement session or derived from recordings of GPS coordinates. To derive distance measurements from recorded GPS coordinates, the AR application may translate the coordinates into distance measurements by computing the distance traveled between successive GPS locations.”). Regarding claim 42, Viner teaches: The method of claim 1, further comprising: determining a corresponding location on the second route based on progress of the first user along the first route, wherein the pace indicator is provided based on the corresponding location on the second route (Viner: FIGs. 1-3, “[0019] FIG. 1 illustrates a snapshot of an example scenario 100 during a first movement session, in accordance with particular embodiments. The scenario 100 shows a user 135 jogging along a route 120. Using an AR application, the user 135 may begin recording (e.g., stored locally on the device running the AR application or a mobile device or stored remotely on a server) a movement session at starting location 110…. The starting location 100 is represented by (x0, y0), which may represent the GPS location measured by the user's mobile device (e.g., longitude and latitude coordinates), and the start time of the movement session is represented by t0. Once the movement session begins, the AR application may track and record the user's movements and the times at which the locations are recorded. In the illustrated snapshot scenario 100, the user 135 may have jogged to location 130, represented by (xi, yi), at time ti. The user 135 may continue to jog until the movement session ends, at which time the recording may cease. The ending location 140, denoted by (xN, yN) and associated with time tN, represents the location at which the movement session ends…. At the end of the movement session, N number of locations and their associated times may be recorded and associated with that movement session. The recordings may include data of the user's movement during the movement session (e.g., as the user 135 jogs along the route 120), such as a plurality of locations from the starting location 110 to the ending location 140 and the time at which each of the plurality of locations is recorded.”, “[0022] The data recorded in the first movement session (e.g., as described with reference to FIG. 1) may be used to generate a virtual companion in a second movement session…. In particular embodiments, the user in the second movement session may be different from the user in the first movement session. For example, after user A recorded his movement data in the first movement session, the recorded data may be used by user B in a second movement session so that user B can see how he does compared to user A. In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity (e.g., two joggers may race against each other in substantially real-time). For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). 
The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, “[0023] FIG. 2 illustrates a snapshot of an example scenario 200 during a second movement session…. The example shows the user 235 jogging along a route 220, which could be the same route 120 shown in FIG. 1 or a different one…. If the routes are different, particular embodiments may determine the virtual companion’s 255 relative position based on distance measurements. The distance measurements may be… derived from recordings of GPS coordinates. To derive distance measurements from recorded GPS coordinates, the AR application may translate the coordinates into distance measurements by computing the distance traveled between successive GPS locations.”, and “[0025] While the user 235 is jogging in the second movement session, the AR application may generate a virtual companion 255 in the field of view of the user 235 to visually show how his 235 current performance compares with the performance recorded in the first movement session. For example, in the second movement session, the user 235 may start tracking his current movement by invoking the AR application at time t0′. At the user's 235 current location 230, the AR application may compute where the virtual companion 255 should appear relative to the user 235 at that instant. For example, the AR application can determine that the user’s 235 current running duration in the second movement session is d′=ti−t0′ (in certain embodiments, timing information may alternatively be tracked as session duration). Based on this duration d′, the AR application may query the recorded first movement session to determine where the user 135 was in the first movement session after jogging for d′ time. For example, from the recorded first movement session, the AR application may determine that d′=d, where d=ti−t0 is the corresponding relative time with respect to the start time t0 in the first movement session. The location (xi, yi) 130 recorded at time ti in the first movement session can be retrieved from the tracking record and used to position the virtual companion 255. For example, the companion 255 representing the user 135 at the location 130 in the first movement session can be conceptually placed relative to the user 235 in the second movement session based on a relative distance between the retrieved past location 130 and the current location 230 of the user 235.”). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Viner.
Regarding claim 3, Viner teaches: The method according to claim 1, the method comprising: an event (race) comprising the first user moving through the first plurality of locations along the first route and the second user moving through the second plurality of locations along the second route (Viner: “[0022] The data recorded in the first movement session (e.g., as described with reference to FIG. 1) may be used to generate a virtual companion in a second movement session… In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity (e.g., two joggers may race against each other in substantially real-time). For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, see also FIGs. 4-5); and initiating a communication link between the first user and the second user for the event (Viner: “[0022] The data recorded in the first movement session (e.g., as described with reference to FIG. 1) may be used to generate a virtual companion in a second movement session… In particular embodiments, the movement data recorded in two movement session may be shared in real-time to enable two users to compare their performances as they are both engaged in the activity (e.g., two joggers may race against each other in substantially real-time). For example, the AR applications executing on the users' respective devices may communicate through a server (e.g., a social networking server). The AR applications may begin respective movement sessions at the same time and share the recorded movement data. For example, at time user A's movement data (e.g., distance traveled since the start time) may be sent to user B's device and user B's movement data (e.g., distance traveled since the start time) may be sent to user A's device. Using the movement data, each user's AR application may render a virtual companion that represents the other user, giving the users a virtual racing experience.”, see also FIG. 4). 
However, it is noted that Viner does not teach: scheduling the event, which would have been obvious to include, such that Viner as modified teaches: scheduling an event comprising the first user moving through the first plurality of locations along the first route and the second user moving through the second plurality of locations along the second route, so two users can schedule a time to race against one another. (Viner: see FIGs. 4-5).
Regarding claim 13, Viner is modified in the same manner and for the same reason set forth in the discussion of claim 3 above. Thus, claim 13 is rejected under similar rationale as claim 3 above.
Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Viner in view of Hogue (US 2019/0311539 A, hereinafter Hogue).
Regarding claim 5, Viner teaches: The method according to claim 4. However, it is noted that Viner does not teach: the method comprising: determining a difference between the condition of the first route and the condition of the second route; and updating the pace indicator in response to the difference between the condition of the first route and the condition of the second route being above a difference threshold. Hogue teaches: determining a difference between a condition of a first route (of a first user) and a condition of a second route (of the first user, or a second user) (Hogue: determining as claimed in determining user lag due to slowing pace corresponding to, e.g., avoiding puddles, ice, sharp objects, and slippery surfaces, or stopping to pet a dog on the path or follow traffic laws, condition of the first route that is not a condition of the second route; see FIGs. 6A-B and 6E-F, “[0055]… [T]he object recognition and/or object tracking can be utilized by the eyewear processing system 150 to detect physical barriers, people, and/or other physical impedances or obstructions…. For example,… a puddle,… an icy portion of the path,… [and/or] a dog along the path….”, “[0056]… [T]he detected barriers can correspond to potential hazards for the user…. For example, puddles, ice, sharp objects,… [and/or] slippery surfaces….”, “[0057]… [T]he image sensor can detect traffic signals… [and/or] traffic signs….”, “[0059] The location sensor 264 is operable to collect location data corresponding to, or used to determine, the current position of the user….”, “[0060] The location data can be used to track the route of the fitness activity to generate route data… that includes a plurality of timestamped locations. The location data can be used to determine navigation data with respect to a predetermined route and/or predetermined destination corresponding to the fitness activity….”, “[0072]… [T]he eyewear processing system 150 can determine that the user is lagging farther behind the virtual fitness partner more than a configured threshold….”, “[0073] Detection of such trigger conditions can induce a corresponding fitness partner action….”, and “[0074] The fitness partner actions can also include a change of pace… of the virtual fitness partner. Changes in virtual fitness partner velocity can be induced, for example, based on detecting a change of pace… of the transit velocity of the user,… based on the physical surroundings detected by the eyewear processing system 150, and/or can based on other input data to the eyewear processing system 150.”, and “[0152]… [T]he user can set their virtual fitness partner to match a pace of a friends run. 
For example, the user can select from a friend from a plurality of connections indicated in their user profile data, where the plurality of connections corresponds to a plurality of other user profiles of the eyewear server system. In the same fashion as viewing and selecting from workout entries of their own fitness activity history data, the user can view route data… corresponding to some or all of a plurality of workout entries of some or all of a plurality of connections. For example, the user can select a particular… route of a friend's prior transit, collected by an eyewear processing system 150 worn by the friend during the friend's prior transit, and stored in the friend’s fitness activity history data by the eyewear server system 295….”, see also FIGs. 1-3A, 3C, 3E, 3G, and 7-8, [0032], [0035], [0039], [0063], [0070], [0083], [0106]-[0107], and [0159]; also, it would have been obvious to include the claimed features since it would have been within the general skill of one of ordinary skill in the art to select features on the basis of their suitability for the intended use to determine whether to update a pace indicator); and updating a pace indicator in response to the difference between the condition of the first route and the condition of the second route being above a difference threshold (Hogue: difference threshold corresponding to triggering trigger condition; i.e., updating as claimed where user lag due to slowing pace corresponding to, e.g., avoiding puddles, ice, sharp objects, and slippery surfaces, or stopping to pet a dog on the path or follow traffic laws, condition of the first route that is not a condition of the second route; “[0072] The plurality of event trigger conditions can also be based on the transit pace of the user…. For example, the eyewear processing system 150 can determine that the user is lagging farther behind the virtual fitness partner more than a configured threshold….”, “[0073] Detection of such trigger conditions can induce a corresponding fitness partner action.….”, “[0074] The fitness partner actions can also include a change of pace… of the virtual fitness partner. Changes in virtual fitness partner velocity can be induced, for example, based on detecting a change of pace… of the transit velocity of the user,… based on the physical surroundings detected by the eyewear processing system 150, and/or can based on other input data to the eyewear processing system 150.”, see also [0055]-[0057], [0070], and [0106]-[0107]; also, it would have been obvious to include the claimed features since it would have been within the general skill of one of ordinary skill in the art to select features on the basis of their suitability for the intended use to account for different route conditions). 
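To illustrate the threshold-based trigger described in Hogue's paragraphs [0072]-[0074], where the user lagging behind the virtual fitness partner by more than a configured threshold induces a change of the partner's pace, a minimal sketch follows; the function name, threshold, and slowdown factor are hypothetical and not taken from Hogue.

```python
# Minimal sketch (not Hogue's code): ease a virtual partner's pace when the user
# lags behind it by more than a configured threshold, in the spirit of Hogue
# [0072]-[0074]. All names and numeric values here are illustrative assumptions.

LAG_THRESHOLD_M = 20.0   # hypothetical "configured threshold" in meters
SLOWDOWN_FACTOR = 0.9    # hypothetical factor for easing the partner's pace


def update_partner_pace(user_distance_m: float,
                        partner_distance_m: float,
                        partner_pace_mps: float) -> float:
    """Return the partner's new pace given both distances along the route."""
    lag = partner_distance_m - user_distance_m
    if lag > LAG_THRESHOLD_M:
        # Trigger condition met: the user is lagging, so slow the partner
        # (one possible "fitness partner action" in Hogue's terms).
        return partner_pace_mps * SLOWDOWN_FACTOR
    return partner_pace_mps


print(update_partner_pace(480.0, 510.0, 3.0))  # lag of 30 m exceeds threshold; prints ~2.7
```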
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Hogue, such that Viner as modified teaches: the method comprising: determining a difference between the condition of the first route and the condition of the second route (first and second routes conditions of Viner combined with determining a difference and first and second route conditions of Hogue); and updating the pace indicator in response to the difference between the condition of the first route and the condition of the second route being above a difference threshold (pace indicator and first and second routes conditions of Viner combined with updating a pace indicator, difference between first and second route conditions, and difference threshold of Hogue), to maintain a virtual fitness partner within view of a user where route obstacles or delays cause the user to lag behind the virtual fitness partner. Regarding claim 6, Viner teaches: The method according to claim 1. However, it is noted that Viner does not teach: the method comprising: determining an effort level of the first user moving through the first plurality of locations along the first route; determining an effort level of the second user moving through the second plurality of locations along the second route; determining whether there is a difference between the effort level of the first user moving through the first plurality of locations along the first route and the effort level of the second user moving through the second plurality of locations along the second route; and in response to determining that there is a difference, updating the pace indicator. Hogue teaches: a method comprising (Hogue: FIG. 7 and [0167], see also FIG. 8 and [0171]): determining an effort level of a first user moving through a first plurality of locations along a first route (Hogue: FIGs. 3A, 3C, 3E, 3G, and 7, “[0059] The location sensor 264 is operable to collect location data corresponding to, or used to determine, the current position of the user….”, “[0060] The location data can be used to track the route of the fitness activity to generate route data… that includes a plurality of timestamped locations. The location data can be used to determine navigation data with respect to a predetermined route and/or predetermined destination corresponding to the fitness activity.…”, “[0078]… [T]he appearance data of the virtual fitness partner is depicted differently in accordance with the pace of the partner velocity vector of the virtual fitness partner. For example, the step stride and/or stride distance depicted in the display of a jogging, running, and/or hiking virtual fitness partner can increase and decrease as the pace of the partner velocity vector increases and decreases, respectively…. Sweat, perspiration, and/or redness of face of the virtual fitness partner displayed, and/or a heaviness of breathing or a breathing rate of the virtual fitness partner… can also increase and/or decrease with a corresponding increase and/or decrease of pace of the virtual fitness partner, and/or can increase over time, for example, when the length of the fitness activity exceeds a threshold. These features can also increase or decrease as biometric data of the user indicates increasing and/or decreasing heart rate, breath rate, and/or effort put forth by the user. 
Some or all of these effects can be based on transit velocity and/or biometric data corresponding to the user, and can change in response to detecting event trigger conditions.”, see also FIGs. 1-2 and 8, [0032], [0039], [0052], “[0061] The at least one biometric sensor 266 [in FIG. 2] is operable to collect biometric data corresponding to the user. For example, the at least one biometric sensor 266 can monitor heart rate…. This can be used to determine health data for the user, such as calorie burning data corresponding to the fitness activity….”, [0063], “[0081] Multiple fitness partners can be displayed simultaneously within the display region, traveling in accordance with their own partner velocity data…. As the user slows down, virtual fitness partners with higher paces can be depicted to pass the user in accordance with their partner velocity data and their relative velocity with respect to the user, and virtual fitness partners with slower paces can come into view from behind. Similarly, as the user speeds up, virtual fitness partners with slower paces can be passed and disappear from view by the user, and virtual fitness partners with higher paces can be depicted to emerge into view from ahead.”, [0083], “[0096]… [I]f the virtual fitness partner is configured to match the pace of the user but the minimum pace is set to 2 miles per hour, the virtual fitness partner will reduce its pace to 2 miles per hour if the user stops, and will continue moving away from the user. If the maximum pace is set to 5 miles per hour and the user begins running at 5.5 miles per hour, the user may pass the virtual fitness partner.” (no effort level if the user stops and 5.5 mph effort level), and [0159]); determining an effort level of a second user moving through a second plurality of locations along a second route (Hogue: FIGs. 6A-B and 6E-F, “[0059] The location sensor 264 is operable to collect location data corresponding to, or used to determine, the current position of the user….”, “[0060] The location data can be used to track the route of the fitness activity to generate route data… that includes a plurality of timestamped locations. The location data can be used to determine navigation data with respect to a predetermined route and/or predetermined destination corresponding to the fitness activity….”, “[0078]… [T]he appearance data of the virtual fitness partner is depicted differently in accordance with the pace of the partner velocity vector of the virtual fitness partner. For example, the step stride and/or stride distance depicted in the display of a jogging, running, and/or hiking virtual fitness partner can increase and decrease as the pace of the partner velocity vector increases and decreases, respectively…. Sweat, perspiration, and/or redness of face of the virtual fitness partner displayed, and/or a heaviness of breathing or a breathing rate of the virtual fitness partner… can also increase and/or decrease with a corresponding increase and/or decrease of pace of the virtual fitness partner, and/or can increase over time, for example, when the length of the fitness activity exceeds a threshold. These features can also increase or decrease as biometric data of the user indicates increasing and/or decreasing heart rate, breath rate, and/or effort put forth by the user. Some or all of these effects can be based on transit velocity and/or biometric data corresponding to the user, and can change in response to detecting event trigger conditions.”, see also FIGs. 
1-2, 3B, 3D, 3F, 3G, and 7-8, [0032], [0035], [0039], [0052], [0061], [0063], “[0081] Multiple fitness partners can be displayed simultaneously within the display region, traveling in accordance with their own partner velocity data…. As the user slows down, virtual fitness partners with higher paces can be depicted to pass the user in accordance with their partner velocity data and their relative velocity with respect to the user, and virtual fitness partners with slower paces can come into view from behind. Similarly, as the user speeds up, virtual fitness partners with slower paces can be passed and disappear from view by the user, and virtual fitness partners with higher paces can be depicted to emerge into view from ahead.”, [0083], “[0096]… [I]f the virtual fitness partner is configured to match the pace of the user but the minimum pace is set to 2 miles per hour, the virtual fitness partner will reduce its pace to 2 miles per hour if the user stops, and will continue moving away from the user….” (an effort level if the user stops), [0152], and [0159]); determining whether there is a difference between the effort level of the first user moving through the first plurality of locations along the first route and the effort level of the second user moving through the second plurality of locations along the second route (Hogue: “[0078]… [T]he appearance data of the virtual fitness partner is depicted differently in accordance with the pace of the partner velocity vector of the virtual fitness partner. For example, the step stride and/or stride distance depicted in the display of a jogging, running, and/or hiking virtual fitness partner can increase and decrease as the pace of the partner velocity vector increases and decreases, respectively…. Sweat, perspiration, and/or redness of face of the virtual fitness partner displayed, and/or a heaviness of breathing or a breathing rate of the virtual fitness partner… can also increase and/or decrease with a corresponding increase and/or decrease of pace of the virtual fitness partner, and/or can increase over time, for example, when the length of the fitness activity exceeds a threshold. These features can also increase or decrease as biometric data of the user indicates increasing and/or decreasing heart rate, breath rate, and/or effort put forth by the user. Some or all of these effects can be based on transit velocity and/or biometric data corresponding to the user, and can change in response to detecting event trigger conditions.”, see also [0055]-[0057] and [0072]-[0074] (efforts corresponding to where the user lags due to slowing pace), “[0081] Multiple fitness partners can be displayed simultaneously within the display region, traveling in accordance with their own partner velocity data…. As the user slows down, virtual fitness partners with higher paces can be depicted to pass the user in accordance with their partner velocity data and their relative velocity with respect to the user, and virtual fitness partners with slower paces can come into view from behind. 
Similarly, as the user speeds up, virtual fitness partners with slower paces can be passed and disappear from view by the user, and virtual fitness partners with higher paces can be depicted to emerge into view from ahead.”, “[0096]… [I]f the virtual fitness partner is configured to match the pace of the user but the minimum pace is set to 2 miles per hour, the virtual fitness partner will reduce its pace to 2 miles per hour if the user stops, and will continue moving away from the user….” (efforts corresponding to if the user stops), and [0106]-[0107]); and in response to determining that there is a difference, updating a pace indicator (140 in FIG. 3B) (Hogue: “[0078]… [T]he appearance data of the virtual fitness partner is depicted differently in accordance with the pace of the partner velocity vector of the virtual fitness partner. For example, the step stride and/or stride distance depicted in the display of a jogging, running, and/or hiking virtual fitness partner can increase and decrease as the pace of the partner velocity vector increases and decreases, respectively…. Sweat, perspiration, and/or redness of face of the virtual fitness partner displayed, and/or a heaviness of breathing or a breathing rate of the virtual fitness partner… can also increase and/or decrease with a corresponding increase and/or decrease of pace of the virtual fitness partner, and/or can increase over time, for example, when the length of the fitness activity exceeds a threshold. These features can also increase or decrease as biometric data of the user indicates increasing and/or decreasing heart rate, breath rate, and/or effort put forth by the user. Some or all of these effects can be based on transit velocity and/or biometric data corresponding to the user, and can change in response to detecting event trigger conditions.” and “[0096]… [I]f the virtual fitness partner is configured to match the pace of the user but the minimum pace is set to 2 miles per hour, the virtual fitness partner will reduce its pace to 2 miles per hour if the user stops, and will continue moving away from the user….”, see also [0055]-[0057] and [0072]-[0074], “[0081] Multiple fitness partners can be displayed simultaneously within the display region, traveling in accordance with their own partner velocity data…. As the user slows down, virtual fitness partners with higher paces can be depicted to pass the user in accordance with their partner velocity data and their relative velocity with respect to the user, and virtual fitness partners with slower paces can come into view from behind. Similarly, as the user speeds up, virtual fitness partners with slower paces can be passed and disappear from view by the user, and virtual fitness partners with higher paces can be depicted to emerge into view from ahead.”, and [0106]-[0107]). 
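To make the recited effort-level comparison concrete, the following minimal sketch uses heart-rate elevation as a stand-in for effort, in the spirit of the biometric data Hogue describes ([0061], [0078]); the type, baseline, and tolerance values are hypothetical and not taken from either reference.

```python
# Minimal sketch (illustrative only): compare two users' effort levels, here
# approximated by heart-rate elevation, and flag the pace indicator for an
# update when they differ. Names and values are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class EffortSample:
    heart_rate_bpm: float
    pace_mps: float


def effort_level(sample: EffortSample, resting_hr: float = 60.0) -> float:
    """Crude effort proxy: heart-rate elevation above a resting baseline."""
    return max(sample.heart_rate_bpm - resting_hr, 0.0)


def needs_indicator_update(first: EffortSample, second: EffortSample,
                           tolerance: float = 5.0) -> bool:
    """True when the two effort levels differ by more than the tolerance."""
    return abs(effort_level(first) - effort_level(second)) > tolerance


print(needs_indicator_update(EffortSample(150, 3.1), EffortSample(132, 3.4)))  # True
```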
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to include: the features taught by Hogue, such that Viner as modified teaches: the method comprising (method of Viner combined with the method of Hogue): determining an effort level of the first user moving through the first plurality of locations along the first route (first user, first plurality of locations, and first route of Viner combined with the effort level of the first user, first plurality of locations, and first route of Hogue); determining an effort level of the second user moving through the second plurality of locations along the second route (second user, second plurality of locations, and second route of Viner combined with the effort level of the second user, second plurality of locations, and second route of Hogue); determining whether there is a difference between the effort level of the first user moving through the first plurality of locations along the first route and the effort level of the second user moving through the second plurality of locations along the second route (first and second users, first and second plurality of locations, and first and second routes of Viner combined with determining whether there is a difference between the effort levels of the first and second users, first and second plurality of locations, and first and second routes of Hogue); and in response to determining that there is a difference, updating the pace indicator (pace indicator of Viner combined with in response to determining and updating the pace indicator of Hogue), to motivate a first user to improve fitness by keeping pace with a second user exerting an effort level higher than that of the first user. Regarding claim 15, Viner is modified in the same manner and for the same reason set forth in the discussion of claim 5 above. Thus, claim 15 is rejected under similar rationale as claim 5 above. Regarding claim 16, Viner is modified in the same manner and for the same reason set forth in the discussion of claim 6 above. Thus, claim 16 is rejected under similar rationale as claim 6 above. Conclusion Applicants’ amendments necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicants are reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to K. Kiyabu whose telephone number is (571) 270-7836. The examiner can normally be reached Monday to Thursday 9:00 A.M. - 5:00 P.M. EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Temesghen Ghebretinsae, can be reached at (571) 272-3017. 
The fax number for the organization where this application or proceeding is assigned is (571) 273-8300. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicants are encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). /K. K./ Examiner, Art Unit 2626 /TEMESGHEN GHEBRETINSAE/Supervisory Patent Examiner, Art Unit 2626 1/15/26
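For reference, the mechanism the rejection attributes to Viner ([0023], [0025]), namely positioning the virtual companion by matching elapsed duration against the recorded session and, where routes differ, comparing distances derived from successive GPS fixes, can be sketched as follows. This is an illustrative reading only; the helper names and the haversine distance calculation are assumptions, not Viner's implementation.

```python
# Illustrative sketch of the technique described in Viner [0023]/[0025]:
# look up how far the recorded runner had traveled after the same elapsed
# duration and compare it with the current user's distance. Not Viner's code.

import bisect
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (assumed helper)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def cumulative_distance(fixes):
    """fixes: list of (t_seconds, lat, lon), t_seconds measured from the session start.
    Returns the distance traveled (meters) at each fix, summed over successive fixes."""
    dist, out = 0.0, [0.0]
    for (_, la0, lo0), (_, la1, lo1) in zip(fixes, fixes[1:]):
        dist += haversine_m(la0, lo0, la1, lo1)
        out.append(dist)
    return out


def companion_lead_m(recorded_fixes, elapsed_s, user_distance_m):
    """How far ahead (+) or behind (-) the recorded runner is after elapsed_s seconds,
    relative to the current user's distance along their own route."""
    times = [t for t, _, _ in recorded_fixes]
    dists = cumulative_distance(recorded_fixes)
    i = min(bisect.bisect_right(times, elapsed_s) - 1, len(dists) - 1)
    return dists[max(i, 0)] - user_distance_m
```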

Prosecution Timeline

Oct 31, 2022
Application Filed
Dec 01, 2024
Non-Final Rejection — §102, §103
May 09, 2025
Response Filed
Jun 14, 2025
Final Rejection — §102, §103
Sep 17, 2025
Request for Continued Examination
Sep 19, 2025
Response after Non-Final Action
Sep 25, 2025
Non-Final Rejection — §102, §103
Dec 31, 2025
Response Filed
Jan 12, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591324
DISPLAY DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12586498
DISPLAY APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12585337
AUGMENTED REALITY EXPERIENCES WITH OBJECT MANIPULATION
2y 5m to grant Granted Mar 24, 2026
Patent 12578807
METHODS AND SYSTEMS FOR CORRECTING USER INPUT
2y 5m to grant Granted Mar 17, 2026
Patent 12578785
INFORMATION PROCESSING APPARATUS, METHOD FOR PROCESSING INFORMATION, AND PROGRAM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
57%
Grant Probability
97%
With Interview (+39.8%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 373 resolved cases by this examiner. Grant probability derived from career allow rate.
