Prosecution Insights
Last updated: April 19, 2026
Application No. 18/790,974

Method and Device for Visual Augmentation of Sporting Events

Non-Final OA: §101, §103, §DP
Filed: Jul 31, 2024
Examiner: WILSON, NICHOLAS R
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average; 467 granted / 537 resolved; +25.0% vs TC avg)
Interview Lift: +12.1% (moderate), measured across resolved cases with vs. without an interview
Avg Prosecution: 2y (fast prosecutor); 25 applications currently pending
Career History: 562 total applications across all art units
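
The headline tiles above reduce to simple ratios on the career counts. A minimal Python sketch reproducing the arithmetic (the variable names are ours, not the tool's):

    granted, resolved = 467, 537
    allow_rate = granted / resolved   # 0.8696, displayed as 87%
    tc_avg = allow_rate - 0.250       # the "+25.0% vs TC avg" gap implies a TC average near 62%
    print(f"career allow rate {allow_rate:.1%}; implied TC average {tc_avg:.1%}")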

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 41.1% (+1.1% vs TC avg)
§102: 24.0% (-16.0% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Tech Center average estimate shown for comparison. Based on career data from 537 resolved cases.
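
The four deltas are internally consistent: subtracting each stated gap from the corresponding rate lands on the same 40.0%, so the Tech Center average estimate in the original chart appears to be a single flat 40% line. A quick Python check (our variable names):

    rates  = {"§101": 9.5, "§103": 41.1, "§102": 24.0, "§112": 14.8}     # examiner rate, percent
    deltas = {"§101": -30.5, "§103": 1.1, "§102": -16.0, "§112": -25.2}  # stated gap vs TC avg
    for statute, rate in rates.items():
        print(f"{statute}: implied TC average = {rate - deltas[statute]:.1f}%")  # 40.0 every time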

Office Action

Grounds: §101, §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation - 35 USC § 101

The limitations “in response to determining, at a second time after the first time, that the object is not within a field-of-view of the image sensor, displaying, on the display in association with the physical environment, a representation of the current state; and at a third time after the second time, ceasing to display the representation of the current state” are considered a practical application of displaying augmented reality information to a user based on previously captured information which is no longer in view.

Statutory Double Patenting

A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).

A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.

Claim 20 is rejected under 35 U.S.C. 101 as claiming the same invention as that of claim 20 of prior U.S. Patent No. 12,094,206. This is a statutory double patenting rejection.

Nonstatutory Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-11, 13-15, 17, and 18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5-13, 15, and 16 of U.S. Patent No. 12,094,206. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims are directed to the same invention, and claim 1 of the current application corresponds with claim 1 of U.S. Patent No. 12,094,206. Claim 1 of U.S. Patent No. 12,094,206 anticipates claim 1 of the current application because it includes all of the limitations of claim 1 of the current application. Below is the limitation mapping between claim 1 of the current application and claim 1 of U.S. Patent No. 12,094,206.

Current Application, claim 1: A method comprising: at a device including an image sensor, a display, one or more processors, and non-transitory memory: obtaining, using the image sensor, a first image, taken at a first time, of a physical environment; detecting, in the first image, an object indicating a current state; in response to determining, at a second time after the first time, that the object is not within a field-of-view of the image sensor, displaying, on the display in association with the physical environment, a representation of the current state; and at a third time after the second time, ceasing to display the representation of the current state.

U.S. Patent No. 12,094,206, claim 1: A method comprising: at a device including an image sensor, a display, one or more processors, and non-transitory memory: obtaining, using the image sensor, a first image, taken at a first time, of a physical environment; detecting, in the first image, an object indicating a current state; in response to determining, at a second time after the first time, that the object is not within a field-of-view of the image sensor, displaying, on the display in association with the physical environment, a representation of the current state; and in response to determining, at a third time after the second time, that the object is within the field-of-view of the image sensor, ceasing to display the representation of the current state.

Below is part 1 of the claim mapping between the current application and U.S. Patent No. 12,094,206.
Current Application:         1   2   3   4   5   6   7    8    9
U.S. Patent No. 12,094,206:  1   5   6   7   8   9   10   11   12

Below is part 2 of the claim mapping between the current application and U.S. Patent No. 12,094,206 (application claims are listed with their dependency chains).

Current Application:         10, 1   11, 10, 1   13   14   15   17, 13   18, 17, 13
U.S. Patent No. 12,094,206:  1       1           13   15   16   13       13

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-10, 12-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (US 10,325,410) (hereinafter Smith) in view of Siu et al. (“SidebARs: Improving awareness of off-screen elements in mobile augmented reality”, 2013) (hereinafter Siu).

Regarding claim 1, Smith teaches a method (“Methods, systems, and techniques for enhancing a live sporting event using augmented reality are provided.” See Abstract) comprising: at a device including an image sensor, a display, one or more processors, and non-transitory memory (“The user is able to see and interact with these augmentations using his or her mobile device without taking his or her eyes off of the field. In some deployments, the mobile device is a cellular smartphone with an (optional) modified virtual headset.” See Abstract. “The user can view the augmentations using the camera of the phone (holding the phone up to look through the camera at the field).” See col. 2, lines 28-31): obtaining, using the image sensor, a first image, taken at a first time, of a physical environment (“AR Game Enhancement Application 200 begins by requesting the user to enter seat information in block 201. This is for the purpose of capturing a user's three dimensional location within the stadium. The user's seat number is mapped to (x, y, z) coordinates within a 3D model of the stadium. One mechanism for mapping user positions to 3D coordinate space is discussed further below. In block 202, the application calibrates itself based upon user input (such as visually panning the camera of the client device) until one or more fixed landmarks are identified. For example, the user may be asked to place a visual of the field represented by a rectangle within the camera's lens view and indicate when it is found. (See, for example, FIGS. 6G and 6H described below.) In one embodiment, a green rectangle in perspective is displayed to the user and the user is requested to indicate (e.g., via tapping on the green rectangle) when it lines up with the field. This information may be used by the graphics rendering engine (e.g., in some embodiments executing on the client's mobile device) to produce a model of the stadium field.” See col. 4, lines 19-39); detecting, in the first image, an object indicating a current state (col. 4, lines 19-39, quoted above); at a second time after the first time, displaying, on the display in association with the physical environment, a representation of the current state (“In blocks 203 and 204, the application presents various augmentations as described in detail below. These can be ‘always on’ types of augmentations like scrimmage lines, game status, and the like, or contextual augmentations such as penalty and punt augmentations, advertisements, player stats, and the like. Some of these augmentations may be visible based upon user settings.” See col. 4, lines 39-45); and at a third time after the second time, ceasing to display the representation of the current state (col. 4, lines 39-45, quoted above; switching to a different augmentation such as an advertisement); but Smith is silent to in response to determining, at a second time after the first time, that the object is not within a field-of-view of the image sensor, and at a third time after the second time, ceasing to display the representation of the current state.
Siu teaches a technique of providing points of interest in a sidebar of the view to indicate which direction a particular object outside the field of view is located (“Figure 3. Schematic (mockup) representation of the interface, including POIs in the camera view, and grouped off-screen POIs with minimum distance, direction and type in the sidebars.” See Figure 3 caption. “(3) The interface displays two transparent sidebars, one on each side of the camera view, that show several types of POIs, grouped by type and displaying the distance to the closest element of that type. Figure 3 shows a mockup of the interface, used for discussing and improving the prototype design in initial meetings. In this mockup, we can easily see the following information on the right sidebar: • There are two fire hydrants (yellow icon) and the closest one is 20 meters away, • There is one police station (green icon) 1 km away. • It is faster to look for this information by rotating left than right. The camera view (in the center of the image) presents the objects within the user's range of view: one fire truck (BX-11) and one hydrant, displayed in typical AR format with the nearest ones in bigger size and more detail, and less detail and size for the one that is further away. We can also see, in the top right, a button to select the layers of interest: the user may decide he is only interested in certain types of POIs (e.g. hydrants and fire trucks) and configure the interface to hide all other types of information.” See page 39, left col.). Smith and Siu both teach presenting augmented information to a user, and Siu teaches that information about objects outside the field of view can be provided in side panels indicating which direction the user should turn the field of view to bring the object into view; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Smith with the augmentation at the edge of the field of view of Siu such that the user can identify information outside of the field of view.

Regarding claim 2, Smith in view of Siu teaches the method of claim 1, wherein the object indicates the current state textually (Siu; see Figure 3, left and right sidebars showing distances, types, and direction; Figure 3 caption and page 39, left col., quoted above).

Regarding claim 3, Smith in view of Siu teaches the method of claim 1, wherein the object indicates the current state numerically (Siu; see Figure 3, left and right sidebars showing distances, types, and direction; Figure 3 caption and page 39, left col., quoted above).

Regarding claim 4, Smith in view of Siu teaches the method of claim 1, wherein detecting the object indicating the current state further includes determining, based on the first image, the current state (Smith; col. 4, lines 19-39, quoted above).
Regarding claim 5, Smith in view of Siu teaches the method of claim 1, wherein displaying the representation of the current state includes displaying a representation of a first state and updating the representation to display a representation of a second state (Smith; col. 4, lines 39-45, quoted above).

Regarding claim 6, Smith in view of Siu teaches the method of claim 5, wherein updating the representation is based on a time elapsed (Smith; col. 4, lines 39-45, quoted above).

Regarding claim 7, Smith in view of Siu teaches the method of claim 5, wherein updating the representation is based on detecting, using the image sensor, an event (Smith; Abstract, col. 2, lines 28-31, and col. 4, lines 39-45, quoted above).

Regarding claim 8, Smith in view of Siu teaches the method of claim 5, wherein updating the representation is based on data obtained via a network interface (Smith; “These images are then sent to a server-side image recognition and markup service 306 which identifies salient features in the image 309 (either via computer vision techniques or as informed by the stats aggregation service 305). A markup is then sent back to the client's rendering engine 304, which combines information about the individual images with information about the user's position to mark up the user's field of view 309.” See col. 5, lines 16-24; also col. 4, lines 39-45, quoted above).
Regarding claim 9, Smith in view of Siu teaches the method of claim 1, further comprising receiving a user input to activate display of the representation of the current state, wherein displaying the representation of the current state is performed further in response to receiving the user input (Smith; col. 4, lines 19-39, quoted above).

Regarding claim 10, Smith in view of Siu teaches the method of claim 1, wherein ceasing to display the representation of the current state is performed in response to a trigger (turning off the display system).

Regarding claim 12, Smith in view of Siu teaches the method of claim 10, wherein the trigger includes a user input (turning off the display system).

Regarding claim 13, Smith teaches a device comprising: an image sensor; a display; a non-transitory memory; and one or more processors (for each of these elements, see the Abstract and col. 2, lines 28-31, quoted above for claim 1) to: obtain, using the image sensor, a first image, taken at a first time, of a physical environment (Smith; col. 4, lines 19-39, quoted above); detect, in the first image, an object indicating a current state (Smith; col. 4, lines 19-39, quoted above); at a second time after the first time, display, on the display in association with the physical environment, a representation of the current state (Smith; col. 4, lines 39-45, quoted above); and at a third time after the second time, cease to display the representation of the current state (Smith; col. 4, lines 39-45, quoted above; switching to a different augmentation such as an advertisement), but Smith is silent to in response to determining, at a second time after the first time, that the object is not within a field-of-view of the image sensor. Siu teaches a technique of providing points of interest in a sidebar of the view to indicate which direction a particular object outside the field of view is located (Siu; Figure 3 caption and page 39, left col., quoted above for claim 1). Smith and Siu both teach presenting augmented information to a user, and Siu teaches that information about objects outside the field of view can be provided in side panels indicating which direction the user should turn the field of view to bring the object into view; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Smith with the augmentation at the edge of the field of view of Siu such that the user can identify information outside of the field of view.

Regarding claim 14, Smith in view of Siu teaches the device of claim 13, wherein the object indicates the current state textually (Siu; see Figure 3, left and right sidebars showing distances, types, and direction; Figure 3 caption and page 39, left col., quoted above).

Regarding claim 15, Smith in view of Siu teaches the device of claim 13, wherein the one or more processors are to display the representation of the current state by displaying a representation of a first state and updating the representation to display a representation of a second state (Smith; col. 4, lines 39-45, quoted above).

Regarding claim 16, Smith in view of Siu teaches the device of claim 13, wherein the one or more processors are further to receive a user input to activate display of the representation of the current state and display the representation of the current state further in response to receiving the user input (Smith; col. 4, lines 19-39, quoted above).
Regarding claim 17, Smith in view of Siu teaches the device of claim 13, wherein the one or more processors are to cease to display the representation of the current state in response to a trigger (turning off the display system).

Regarding claim 19, Smith in view of Siu teaches the device of claim 17, wherein the trigger includes a user input (turning off the display system).

Allowable Subject Matter

Claims 11 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and overcoming the double patenting rejections outlined above. The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, alone or in combination, is silent to the limitation “wherein the trigger includes a determination that the object is within the field-of-view of the image sensor” of claim 11 when read in light of the rest of the limitations of claim 11 and the claims from which claim 11 depends; thus claim 11 contains allowable subject matter. The prior art of record, alone or in combination, is likewise silent to the identical limitation of claim 18 when read in light of the rest of the limitations of claim 18 and the claims from which claim 18 depends; thus claim 18 contains allowable subject matter.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS R WILSON whose telephone number is (571) 272-0936. The examiner can normally be reached M-F 7:30-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS R WILSON/
Primary Examiner, Art Unit 2611
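
To make the double patenting distinction concrete: the application's claim 1 ceases the overlay unconditionally at “a third time,” while claim 1 of U.S. Patent No. 12,094,206 ceases only when the object re-enters the field of view, making the patented claim one species of the broader pending claim. The Python sketch below is purely illustrative annotation, not Apple's implementation; every name and the control flow are invented:

    from typing import Optional

    def step(object_in_fov: Optional[str], last_state: Optional[str],
             showing: bool, cease_on_reentry: bool, third_time_reached: bool):
        # One frame of the claimed display behavior.
        # cease_on_reentry=True  ~ patent claim 1 (cease when the object returns to the FOV)
        # cease_on_reentry=False ~ application claim 1 (cease at "a third time"; trigger left open)
        if object_in_fov is not None:            # object visible: remember the state it indicates
            last_state = object_in_fov
            if showing and cease_on_reentry:
                showing = False                  # patent: ceasing is triggered by re-entry
        elif last_state is not None:
            showing = True                       # object left the FOV: persist its last-known state
        if showing and not cease_on_reentry and third_time_reached:
            showing = False                      # application: ceasing at the third time
        return last_state, showing

    # Second time: scoreboard out of view, overlay shown; third time: overlay ceased.
    state, showing = step(None, "14-7", False, cease_on_reentry=False, third_time_reached=False)
    state, showing = step(None, state, showing, cease_on_reentry=False, third_time_reached=True)
    print(state, showing)  # 14-7 False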

Prosecution Timeline

Jul 31, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602869
APPARATUS, SYSTEMS AND METHODS FOR PROCESSING IMAGES
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602891
TELEPORTATION SYSTEM COMBINING VIRTUAL REALITY AND AUGMENTED REALITY
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12579605
INFORMATION PROCESSING DEVICE AND METHOD OF CONTROLLING DISPLAY DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12567215
SYSTEM AND METHOD OF CONTROLLING SYSTEM
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561911
3D CAGE GENERATION USING SIGNED DISTANCE FUNCTION APPROXIMANT
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+12.1%)
Median Time to Grant: 2y
PTA Risk: Low
Based on 537 resolved cases by this examiner. Grant probability derived from career allow rate.
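
The projection tiles look like straight arithmetic on the career stats; here is a sketch under that assumption (how the tool actually combines, caps, or rounds these figures is not disclosed):

    base = 467 / 537                        # 86.96%, displayed as "87% Grant Probability"
    lift = 0.121                            # the "+12.1%" interview lift
    with_interview = min(base + lift, 1.0)  # 99.07%, displayed as "99% With Interview"
    print(f"base {base:.0%}; with interview {with_interview:.0%}")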
